00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 1998 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3264 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.002 Started by timer 00:00:00.104 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.105 The recommended git tool is: git 00:00:00.105 using credential 00000000-0000-0000-0000-000000000002 00:00:00.107 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.137 Fetching changes from the remote Git repository 00:00:00.138 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.169 Using shallow fetch with depth 1 00:00:00.169 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.169 > git --version # timeout=10 00:00:00.191 > git --version # 'git version 2.39.2' 00:00:00.191 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.207 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.207 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.721 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.732 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.743 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD) 00:00:07.743 > git config core.sparsecheckout # timeout=10 00:00:07.754 > git read-tree -mu HEAD # timeout=10 00:00:07.768 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5 00:00:07.784 Commit message: "inventory: add WCP3 to free inventory" 00:00:07.784 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10 00:00:07.862 [Pipeline] Start of Pipeline 00:00:07.878 [Pipeline] library 00:00:07.879 Loading library shm_lib@master 00:00:07.880 Library shm_lib@master is cached. Copying from home. 00:00:07.895 [Pipeline] node 00:00:07.903 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:07.906 [Pipeline] { 00:00:07.917 [Pipeline] catchError 00:00:07.919 [Pipeline] { 00:00:07.931 [Pipeline] wrap 00:00:07.940 [Pipeline] { 00:00:07.947 [Pipeline] stage 00:00:07.949 [Pipeline] { (Prologue) 00:00:07.964 [Pipeline] echo 00:00:07.965 Node: VM-host-SM9 00:00:07.971 [Pipeline] cleanWs 00:00:07.979 [WS-CLEANUP] Deleting project workspace... 00:00:07.979 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.985 [WS-CLEANUP] done 00:00:08.174 [Pipeline] setCustomBuildProperty 00:00:08.266 [Pipeline] httpRequest 00:00:08.295 [Pipeline] echo 00:00:08.296 Sorcerer 10.211.164.101 is alive 00:00:08.301 [Pipeline] httpRequest 00:00:08.305 HttpMethod: GET 00:00:08.305 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:08.306 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:08.324 Response Code: HTTP/1.1 200 OK 00:00:08.324 Success: Status code 200 is in the accepted range: 200,404 00:00:08.325 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:12.733 [Pipeline] sh 00:00:13.015 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:13.031 [Pipeline] httpRequest 00:00:13.057 [Pipeline] echo 00:00:13.059 Sorcerer 10.211.164.101 is alive 00:00:13.067 [Pipeline] httpRequest 00:00:13.071 HttpMethod: GET 00:00:13.071 URL: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:13.072 Sending request to url: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:13.080 Response Code: HTTP/1.1 200 OK 00:00:13.080 Success: Status code 200 is in the accepted range: 200,404 00:00:13.081 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:01:23.848 [Pipeline] sh 00:01:24.129 + tar --no-same-owner -xf spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:01:27.472 [Pipeline] sh 00:01:27.785 + git -C spdk log --oneline -n5 00:01:27.785 719d03c6a sock/uring: only register net impl if supported 00:01:27.785 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev 00:01:27.785 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO 00:01:27.785 6c7c1f57e accel: add sequence outstanding stat 00:01:27.785 3bc8e6a26 accel: add utility to put task 00:01:27.809 [Pipeline] withCredentials 00:01:27.820 > git --version # timeout=10 00:01:27.834 > git --version # 'git version 2.39.2' 00:01:27.851 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:27.853 [Pipeline] { 00:01:27.865 [Pipeline] retry 00:01:27.868 [Pipeline] { 00:01:27.890 [Pipeline] sh 00:01:28.174 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:01:30.084 [Pipeline] } 00:01:30.106 [Pipeline] // retry 00:01:30.112 [Pipeline] } 00:01:30.132 [Pipeline] // withCredentials 00:01:30.142 [Pipeline] httpRequest 00:01:30.164 [Pipeline] echo 00:01:30.166 Sorcerer 10.211.164.101 is alive 00:01:30.175 [Pipeline] httpRequest 00:01:30.179 HttpMethod: GET 00:01:30.179 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:30.180 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:30.196 Response Code: HTTP/1.1 200 OK 00:01:30.197 Success: Status code 200 is in the accepted range: 200,404 00:01:30.197 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:39.335 [Pipeline] sh 00:01:39.614 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:41.526 [Pipeline] sh 00:01:41.803 + git -C dpdk log --oneline -n5 00:01:41.803 caf0f5d395 version: 22.11.4 00:01:41.803 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:41.803 dc9c799c7d vhost: fix 
missing spinlock unlock 00:01:41.803 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:41.803 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:41.820 [Pipeline] writeFile 00:01:41.833 [Pipeline] sh 00:01:42.109 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:42.117 [Pipeline] sh 00:01:42.389 + cat autorun-spdk.conf 00:01:42.389 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:42.389 SPDK_TEST_NVMF=1 00:01:42.389 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:42.389 SPDK_TEST_URING=1 00:01:42.389 SPDK_TEST_USDT=1 00:01:42.389 SPDK_RUN_UBSAN=1 00:01:42.389 NET_TYPE=virt 00:01:42.389 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:42.389 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:42.389 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:42.395 RUN_NIGHTLY=1 00:01:42.397 [Pipeline] } 00:01:42.412 [Pipeline] // stage 00:01:42.424 [Pipeline] stage 00:01:42.426 [Pipeline] { (Run VM) 00:01:42.436 [Pipeline] sh 00:01:42.710 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:42.710 + echo 'Start stage prepare_nvme.sh' 00:01:42.710 Start stage prepare_nvme.sh 00:01:42.710 + [[ -n 5 ]] 00:01:42.710 + disk_prefix=ex5 00:01:42.710 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:01:42.710 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:01:42.710 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:01:42.710 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:42.710 ++ SPDK_TEST_NVMF=1 00:01:42.710 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:42.710 ++ SPDK_TEST_URING=1 00:01:42.710 ++ SPDK_TEST_USDT=1 00:01:42.710 ++ SPDK_RUN_UBSAN=1 00:01:42.710 ++ NET_TYPE=virt 00:01:42.710 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:42.710 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:42.710 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:42.710 ++ RUN_NIGHTLY=1 00:01:42.710 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:42.710 + nvme_files=() 00:01:42.710 + declare -A nvme_files 00:01:42.710 + backend_dir=/var/lib/libvirt/images/backends 00:01:42.710 + nvme_files['nvme.img']=5G 00:01:42.710 + nvme_files['nvme-cmb.img']=5G 00:01:42.710 + nvme_files['nvme-multi0.img']=4G 00:01:42.710 + nvme_files['nvme-multi1.img']=4G 00:01:42.710 + nvme_files['nvme-multi2.img']=4G 00:01:42.710 + nvme_files['nvme-openstack.img']=8G 00:01:42.710 + nvme_files['nvme-zns.img']=5G 00:01:42.710 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:42.710 + (( SPDK_TEST_FTL == 1 )) 00:01:42.710 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:42.710 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:42.710 + for nvme in "${!nvme_files[@]}" 00:01:42.711 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G 00:01:42.711 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:42.711 + for nvme in "${!nvme_files[@]}" 00:01:42.711 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G 00:01:42.711 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:42.711 + for nvme in "${!nvme_files[@]}" 00:01:42.711 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G 00:01:42.967 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:42.967 + for nvme in "${!nvme_files[@]}" 00:01:42.967 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G 00:01:42.968 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:42.968 + for nvme in "${!nvme_files[@]}" 00:01:42.968 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G 00:01:43.319 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:43.319 + for nvme in "${!nvme_files[@]}" 00:01:43.319 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G 00:01:43.319 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:43.319 + for nvme in "${!nvme_files[@]}" 00:01:43.319 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G 00:01:43.319 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:43.585 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu 00:01:43.585 + echo 'End stage prepare_nvme.sh' 00:01:43.585 End stage prepare_nvme.sh 00:01:43.608 [Pipeline] sh 00:01:43.882 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:43.882 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -H -a -v -f fedora38 00:01:43.882 00:01:43.882 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:01:43.882 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:01:43.882 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:43.882 HELP=0 00:01:43.882 DRY_RUN=0 00:01:43.882 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img, 00:01:43.882 NVME_DISKS_TYPE=nvme,nvme, 00:01:43.882 NVME_AUTO_CREATE=0 00:01:43.882 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img, 00:01:43.882 NVME_CMB=,, 00:01:43.882 NVME_PMR=,, 00:01:43.882 NVME_ZNS=,, 00:01:43.882 NVME_MS=,, 00:01:43.882 NVME_FDP=,, 
00:01:43.882 SPDK_VAGRANT_DISTRO=fedora38 00:01:43.882 SPDK_VAGRANT_VMCPU=10 00:01:43.882 SPDK_VAGRANT_VMRAM=12288 00:01:43.882 SPDK_VAGRANT_PROVIDER=libvirt 00:01:43.882 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:43.882 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:43.882 SPDK_OPENSTACK_NETWORK=0 00:01:43.882 VAGRANT_PACKAGE_BOX=0 00:01:43.882 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:43.882 FORCE_DISTRO=true 00:01:43.882 VAGRANT_BOX_VERSION= 00:01:43.882 EXTRA_VAGRANTFILES= 00:01:43.882 NIC_MODEL=e1000 00:01:43.882 00:01:43.882 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt' 00:01:43.882 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:46.413 Bringing machine 'default' up with 'libvirt' provider... 00:01:46.981 ==> default: Creating image (snapshot of base box volume). 00:01:46.982 ==> default: Creating domain with the following settings... 00:01:46.982 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1720849718_10b0934c48e9a05e8dd9 00:01:46.982 ==> default: -- Domain type: kvm 00:01:46.982 ==> default: -- Cpus: 10 00:01:46.982 ==> default: -- Feature: acpi 00:01:46.982 ==> default: -- Feature: apic 00:01:46.982 ==> default: -- Feature: pae 00:01:46.982 ==> default: -- Memory: 12288M 00:01:46.982 ==> default: -- Memory Backing: hugepages: 00:01:46.982 ==> default: -- Management MAC: 00:01:46.982 ==> default: -- Loader: 00:01:46.982 ==> default: -- Nvram: 00:01:46.982 ==> default: -- Base box: spdk/fedora38 00:01:46.982 ==> default: -- Storage pool: default 00:01:46.982 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1720849718_10b0934c48e9a05e8dd9.img (20G) 00:01:46.982 ==> default: -- Volume Cache: default 00:01:46.982 ==> default: -- Kernel: 00:01:46.982 ==> default: -- Initrd: 00:01:46.982 ==> default: -- Graphics Type: vnc 00:01:46.982 ==> default: -- Graphics Port: -1 00:01:46.982 ==> default: -- Graphics IP: 127.0.0.1 00:01:46.982 ==> default: -- Graphics Password: Not defined 00:01:46.982 ==> default: -- Video Type: cirrus 00:01:46.982 ==> default: -- Video VRAM: 9216 00:01:46.982 ==> default: -- Sound Type: 00:01:46.982 ==> default: -- Keymap: en-us 00:01:46.982 ==> default: -- TPM Path: 00:01:46.982 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:46.982 ==> default: -- Command line args: 00:01:46.982 ==> default: -> value=-device, 00:01:46.982 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:46.982 ==> default: -> value=-drive, 00:01:46.982 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0, 00:01:46.982 ==> default: -> value=-device, 00:01:46.982 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:46.982 ==> default: -> value=-device, 00:01:46.982 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:46.982 ==> default: -> value=-drive, 00:01:46.982 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:46.982 ==> default: -> value=-device, 00:01:46.982 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:46.982 ==> default: -> value=-drive, 
00:01:46.982 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:46.982 ==> default: -> value=-device, 00:01:46.982 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:46.982 ==> default: -> value=-drive, 00:01:46.982 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:46.982 ==> default: -> value=-device, 00:01:46.982 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:47.241 ==> default: Creating shared folders metadata... 00:01:47.241 ==> default: Starting domain. 00:01:48.619 ==> default: Waiting for domain to get an IP address... 00:02:03.505 ==> default: Waiting for SSH to become available... 00:02:04.881 ==> default: Configuring and enabling network interfaces... 00:02:09.069 default: SSH address: 192.168.121.38:22 00:02:09.069 default: SSH username: vagrant 00:02:09.069 default: SSH auth method: private key 00:02:10.975 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:17.533 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:22.809 ==> default: Mounting SSHFS shared folder... 00:02:24.711 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:02:24.711 ==> default: Checking Mount.. 00:02:26.083 ==> default: Folder Successfully Mounted! 00:02:26.083 ==> default: Running provisioner: file... 00:02:26.650 default: ~/.gitconfig => .gitconfig 00:02:27.216 00:02:27.216 SUCCESS! 00:02:27.216 00:02:27.216 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:02:27.216 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:27.216 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 00:02:27.216 00:02:27.226 [Pipeline] } 00:02:27.243 [Pipeline] // stage 00:02:27.251 [Pipeline] dir 00:02:27.252 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt 00:02:27.253 [Pipeline] { 00:02:27.268 [Pipeline] catchError 00:02:27.270 [Pipeline] { 00:02:27.285 [Pipeline] sh 00:02:27.564 + vagrant ssh-config --host vagrant 00:02:27.564 + sed -ne /^Host/,$p 00:02:27.564 + tee ssh_conf 00:02:30.958 Host vagrant 00:02:30.958 HostName 192.168.121.38 00:02:30.958 User vagrant 00:02:30.958 Port 22 00:02:30.958 UserKnownHostsFile /dev/null 00:02:30.958 StrictHostKeyChecking no 00:02:30.958 PasswordAuthentication no 00:02:30.958 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:02:30.958 IdentitiesOnly yes 00:02:30.958 LogLevel FATAL 00:02:30.958 ForwardAgent yes 00:02:30.958 ForwardX11 yes 00:02:30.958 00:02:30.971 [Pipeline] withEnv 00:02:30.974 [Pipeline] { 00:02:30.989 [Pipeline] sh 00:02:31.268 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:31.268 source /etc/os-release 00:02:31.268 [[ -e /image.version ]] && img=$(< /image.version) 00:02:31.268 # Minimal, systemd-like check. 
00:02:31.268 if [[ -e /.dockerenv ]]; then 00:02:31.268 # Clear garbage from the node's name: 00:02:31.268 # agt-er_autotest_547-896 -> autotest_547-896 00:02:31.268 # $HOSTNAME is the actual container id 00:02:31.268 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:31.268 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:31.268 # We can assume this is a mount from a host where container is running, 00:02:31.268 # so fetch its hostname to easily identify the target swarm worker. 00:02:31.268 container="$(< /etc/hostname) ($agent)" 00:02:31.268 else 00:02:31.268 # Fallback 00:02:31.268 container=$agent 00:02:31.268 fi 00:02:31.268 fi 00:02:31.268 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:31.268 00:02:31.279 [Pipeline] } 00:02:31.309 [Pipeline] // withEnv 00:02:31.323 [Pipeline] setCustomBuildProperty 00:02:31.338 [Pipeline] stage 00:02:31.340 [Pipeline] { (Tests) 00:02:31.351 [Pipeline] sh 00:02:31.623 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:31.895 [Pipeline] sh 00:02:32.174 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:32.452 [Pipeline] timeout 00:02:32.453 Timeout set to expire in 30 min 00:02:32.456 [Pipeline] { 00:02:32.478 [Pipeline] sh 00:02:32.764 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:33.331 HEAD is now at 719d03c6a sock/uring: only register net impl if supported 00:02:33.345 [Pipeline] sh 00:02:33.624 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:33.897 [Pipeline] sh 00:02:34.176 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:34.453 [Pipeline] sh 00:02:34.732 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:02:34.991 ++ readlink -f spdk_repo 00:02:34.991 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:34.991 + [[ -n /home/vagrant/spdk_repo ]] 00:02:34.991 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:34.991 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:34.991 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:34.991 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:34.991 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:34.991 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:02:34.991 + cd /home/vagrant/spdk_repo 00:02:34.991 + source /etc/os-release 00:02:34.991 ++ NAME='Fedora Linux' 00:02:34.991 ++ VERSION='38 (Cloud Edition)' 00:02:34.991 ++ ID=fedora 00:02:34.991 ++ VERSION_ID=38 00:02:34.991 ++ VERSION_CODENAME= 00:02:34.991 ++ PLATFORM_ID=platform:f38 00:02:34.991 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:02:34.991 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:34.991 ++ LOGO=fedora-logo-icon 00:02:34.991 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:02:34.991 ++ HOME_URL=https://fedoraproject.org/ 00:02:34.991 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:02:34.991 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:34.991 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:34.991 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:34.991 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:02:34.991 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:34.991 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:02:34.991 ++ SUPPORT_END=2024-05-14 00:02:34.991 ++ VARIANT='Cloud Edition' 00:02:34.991 ++ VARIANT_ID=cloud 00:02:34.991 + uname -a 00:02:34.991 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:02:34.991 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:35.250 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:35.250 Hugepages 00:02:35.250 node hugesize free / total 00:02:35.250 node0 1048576kB 0 / 0 00:02:35.509 node0 2048kB 0 / 0 00:02:35.509 00:02:35.509 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:35.509 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:35.509 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:02:35.509 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:02:35.509 + rm -f /tmp/spdk-ld-path 00:02:35.509 + source autorun-spdk.conf 00:02:35.509 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:35.509 ++ SPDK_TEST_NVMF=1 00:02:35.509 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:35.509 ++ SPDK_TEST_URING=1 00:02:35.509 ++ SPDK_TEST_USDT=1 00:02:35.509 ++ SPDK_RUN_UBSAN=1 00:02:35.509 ++ NET_TYPE=virt 00:02:35.509 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:35.509 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:35.509 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:35.509 ++ RUN_NIGHTLY=1 00:02:35.509 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:35.509 + [[ -n '' ]] 00:02:35.509 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:35.509 + for M in /var/spdk/build-*-manifest.txt 00:02:35.509 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:35.509 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:35.509 + for M in /var/spdk/build-*-manifest.txt 00:02:35.509 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:35.509 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:35.509 ++ uname 00:02:35.509 + [[ Linux == \L\i\n\u\x ]] 00:02:35.509 + sudo dmesg -T 00:02:35.509 + sudo dmesg --clear 00:02:35.509 + dmesg_pid=5899 00:02:35.509 + sudo dmesg -Tw 00:02:35.509 + [[ Fedora Linux == FreeBSD ]] 00:02:35.509 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:35.509 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:35.509 + [[ -e 
/var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:35.509 + [[ -x /usr/src/fio-static/fio ]] 00:02:35.509 + export FIO_BIN=/usr/src/fio-static/fio 00:02:35.509 + FIO_BIN=/usr/src/fio-static/fio 00:02:35.509 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:35.509 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:35.509 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:35.509 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:35.509 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:35.509 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:35.509 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:35.509 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:35.509 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:35.509 Test configuration: 00:02:35.509 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:35.509 SPDK_TEST_NVMF=1 00:02:35.509 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:35.509 SPDK_TEST_URING=1 00:02:35.509 SPDK_TEST_USDT=1 00:02:35.509 SPDK_RUN_UBSAN=1 00:02:35.509 NET_TYPE=virt 00:02:35.509 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:35.509 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:35.509 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:35.767 RUN_NIGHTLY=1 05:49:27 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:35.767 05:49:27 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:35.767 05:49:27 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:35.767 05:49:27 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:35.767 05:49:27 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:35.767 05:49:27 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:35.767 05:49:27 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:35.768 05:49:27 -- paths/export.sh@5 -- $ export PATH 00:02:35.768 05:49:27 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:35.768 05:49:27 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:35.768 05:49:27 -- 
common/autobuild_common.sh@444 -- $ date +%s 00:02:35.768 05:49:27 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720849767.XXXXXX 00:02:35.768 05:49:27 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720849767.dLqYVy 00:02:35.768 05:49:27 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:02:35.768 05:49:27 -- common/autobuild_common.sh@450 -- $ '[' -n v22.11.4 ']' 00:02:35.768 05:49:27 -- common/autobuild_common.sh@451 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:35.768 05:49:27 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:35.768 05:49:27 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:35.768 05:49:27 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:35.768 05:49:27 -- common/autobuild_common.sh@460 -- $ get_config_params 00:02:35.768 05:49:27 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:02:35.768 05:49:27 -- common/autotest_common.sh@10 -- $ set +x 00:02:35.768 05:49:27 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:02:35.768 05:49:27 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:02:35.768 05:49:27 -- pm/common@17 -- $ local monitor 00:02:35.768 05:49:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:35.768 05:49:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:35.768 05:49:27 -- pm/common@25 -- $ sleep 1 00:02:35.768 05:49:27 -- pm/common@21 -- $ date +%s 00:02:35.768 05:49:27 -- pm/common@21 -- $ date +%s 00:02:35.768 05:49:27 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1720849767 00:02:35.768 05:49:27 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1720849767 00:02:35.768 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1720849767_collect-vmstat.pm.log 00:02:35.768 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1720849767_collect-cpu-load.pm.log 00:02:36.704 05:49:28 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:02:36.704 05:49:28 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:36.704 05:49:28 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:36.704 05:49:28 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:36.704 05:49:28 -- spdk/autobuild.sh@16 -- $ date -u 00:02:36.704 Sat Jul 13 05:49:28 AM UTC 2024 00:02:36.704 05:49:28 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:36.704 v24.09-pre-202-g719d03c6a 00:02:36.704 05:49:28 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:36.704 05:49:28 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:36.704 05:49:28 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:36.704 05:49:28 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:36.704 05:49:28 -- common/autotest_common.sh@1105 -- $ xtrace_disable 
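For reference, the $config_params string assembled a few entries above corresponds roughly to a configure invocation like the one below. This is a hand-written sketch using only flags and paths visible in the trace (trimmed to the ones relevant to this run); it is not the autobuild script's exact command.

    # Minimal sketch: configure SPDK against the pre-built external DPDK,
    # reusing flags from the $config_params assembled in the trace above.
    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug --enable-werror --with-usdt \
                --enable-ubsan --with-uring \
                --with-dpdk=/home/vagrant/spdk_repo/dpdk/build
    make -j"$(nproc)"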
00:02:36.704 05:49:28 -- common/autotest_common.sh@10 -- $ set +x 00:02:36.704 ************************************ 00:02:36.704 START TEST ubsan 00:02:36.704 ************************************ 00:02:36.704 using ubsan 00:02:36.704 05:49:28 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:02:36.704 00:02:36.704 real 0m0.000s 00:02:36.704 user 0m0.000s 00:02:36.704 sys 0m0.000s 00:02:36.704 05:49:28 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:36.704 05:49:28 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:36.704 ************************************ 00:02:36.704 END TEST ubsan 00:02:36.704 ************************************ 00:02:36.704 05:49:28 -- common/autotest_common.sh@1142 -- $ return 0 00:02:36.704 05:49:28 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:02:36.704 05:49:28 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:36.704 05:49:28 -- common/autobuild_common.sh@436 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:36.704 05:49:28 -- common/autotest_common.sh@1099 -- $ '[' 2 -le 1 ']' 00:02:36.704 05:49:28 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:36.704 05:49:28 -- common/autotest_common.sh@10 -- $ set +x 00:02:36.704 ************************************ 00:02:36.704 START TEST build_native_dpdk 00:02:36.704 ************************************ 00:02:36.704 05:49:28 build_native_dpdk -- common/autotest_common.sh@1123 -- $ _build_native_dpdk 00:02:36.704 05:49:28 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:36.704 05:49:28 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:36.704 05:49:28 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:36.704 05:49:28 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:36.704 05:49:28 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:36.704 05:49:28 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:36.704 05:49:28 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:36.704 05:49:28 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:36.704 05:49:28 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:36.704 05:49:28 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:36.704 05:49:28 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:36.704 05:49:28 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:36.704 05:49:28 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:36.704 05:49:28 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:36.704 05:49:28 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:36.704 05:49:28 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:36.704 05:49:28 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:36.704 05:49:28 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:02:36.704 05:49:28 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:36.704 05:49:28 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:36.704 caf0f5d395 version: 22.11.4 00:02:36.704 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:36.704 dc9c799c7d vhost: fix missing spinlock unlock 00:02:36.704 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:36.704 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:36.704 05:49:28 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:36.704 05:49:28 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:36.704 05:49:28 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:02:36.704 05:49:28 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:36.704 05:49:28 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:36.704 05:49:28 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:36.704 05:49:28 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:36.704 05:49:28 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:36.704 05:49:28 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:36.704 05:49:28 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:36.704 05:49:28 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:36.704 05:49:28 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:36.704 05:49:28 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:36.704 05:49:28 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:36.704 05:49:28 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:36.704 05:49:28 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:36.704 05:49:28 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:36.704 05:49:28 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:02:36.704 05:49:28 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:02:36.704 05:49:28 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:02:36.704 05:49:28 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:02:36.704 05:49:28 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:02:36.704 05:49:28 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:02:36.704 05:49:28 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:02:36.704 05:49:28 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:02:36.704 05:49:28 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:02:36.704 05:49:28 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:02:36.704 05:49:28 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:02:36.704 05:49:28 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:02:36.704 05:49:28 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:02:36.704 05:49:28 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:02:36.704 05:49:28 
build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:02:36.704 05:49:28 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:36.704 05:49:28 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22 00:02:36.704 05:49:28 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22 00:02:36.704 05:49:28 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:36.704 05:49:28 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22 00:02:36.704 05:49:28 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22 00:02:36.704 05:49:28 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:02:36.704 05:49:28 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:02:36.704 05:49:28 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:36.704 05:49:28 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:02:36.704 05:49:28 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:02:36.704 05:49:28 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:02:36.704 05:49:28 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:02:36.704 05:49:28 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:36.704 patching file config/rte_config.h 00:02:36.704 Hunk #1 succeeded at 60 (offset 1 line). 00:02:36.704 05:49:28 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:02:36.963 05:49:28 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s 00:02:36.963 05:49:28 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:02:36.963 05:49:28 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:36.963 05:49:28 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:42.231 The Meson build system 00:02:42.231 Version: 1.3.1 00:02:42.231 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:42.231 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:42.231 Build type: native build 00:02:42.231 Program cat found: YES (/usr/bin/cat) 00:02:42.231 Project name: DPDK 00:02:42.231 Project version: 22.11.4 00:02:42.231 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:42.231 C linker for the host machine: gcc ld.bfd 2.39-16 00:02:42.231 Host machine cpu family: x86_64 00:02:42.231 Host machine cpu: x86_64 00:02:42.231 Message: ## Building in Developer Mode ## 00:02:42.231 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:42.231 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:42.231 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:42.231 Program objdump found: YES (/usr/bin/objdump) 00:02:42.231 Program python3 found: YES (/usr/bin/python3) 00:02:42.231 Program cat found: YES (/usr/bin/cat) 00:02:42.231 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
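The cmp_versions trace above (lt 22.11.4 21.11.0 ending in "return 1") is a dotted-version comparison: split both versions on '.', compare numeric fields left to right. A simplified stand-alone illustration of the same idea, written here for clarity and not copied from scripts/common.sh, looks like this:

    # Return 0 (true) when $1 is strictly older than $2; missing fields count as 0.
    lt() {
        local IFS=.
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1    # equal versions are not "less than"
    }
    lt 22.11.4 21.11.0 || echo "not older"   # prints "not older", matching the return 1 traced above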
00:02:42.231 Checking for size of "void *" : 8 00:02:42.231 Checking for size of "void *" : 8 (cached) 00:02:42.231 Library m found: YES 00:02:42.231 Library numa found: YES 00:02:42.231 Has header "numaif.h" : YES 00:02:42.231 Library fdt found: NO 00:02:42.231 Library execinfo found: NO 00:02:42.231 Has header "execinfo.h" : YES 00:02:42.231 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:42.231 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:42.231 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:42.231 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:42.231 Run-time dependency openssl found: YES 3.0.9 00:02:42.231 Run-time dependency libpcap found: YES 1.10.4 00:02:42.231 Has header "pcap.h" with dependency libpcap: YES 00:02:42.231 Compiler for C supports arguments -Wcast-qual: YES 00:02:42.231 Compiler for C supports arguments -Wdeprecated: YES 00:02:42.231 Compiler for C supports arguments -Wformat: YES 00:02:42.231 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:42.231 Compiler for C supports arguments -Wformat-security: NO 00:02:42.231 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:42.231 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:42.231 Compiler for C supports arguments -Wnested-externs: YES 00:02:42.231 Compiler for C supports arguments -Wold-style-definition: YES 00:02:42.231 Compiler for C supports arguments -Wpointer-arith: YES 00:02:42.231 Compiler for C supports arguments -Wsign-compare: YES 00:02:42.231 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:42.231 Compiler for C supports arguments -Wundef: YES 00:02:42.231 Compiler for C supports arguments -Wwrite-strings: YES 00:02:42.231 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:42.231 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:42.231 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:42.231 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:42.231 Compiler for C supports arguments -mavx512f: YES 00:02:42.231 Checking if "AVX512 checking" compiles: YES 00:02:42.231 Fetching value of define "__SSE4_2__" : 1 00:02:42.231 Fetching value of define "__AES__" : 1 00:02:42.231 Fetching value of define "__AVX__" : 1 00:02:42.231 Fetching value of define "__AVX2__" : 1 00:02:42.231 Fetching value of define "__AVX512BW__" : (undefined) 00:02:42.231 Fetching value of define "__AVX512CD__" : (undefined) 00:02:42.231 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:42.231 Fetching value of define "__AVX512F__" : (undefined) 00:02:42.231 Fetching value of define "__AVX512VL__" : (undefined) 00:02:42.231 Fetching value of define "__PCLMUL__" : 1 00:02:42.231 Fetching value of define "__RDRND__" : 1 00:02:42.231 Fetching value of define "__RDSEED__" : 1 00:02:42.231 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:42.231 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:42.231 Message: lib/kvargs: Defining dependency "kvargs" 00:02:42.231 Message: lib/telemetry: Defining dependency "telemetry" 00:02:42.231 Checking for function "getentropy" : YES 00:02:42.231 Message: lib/eal: Defining dependency "eal" 00:02:42.231 Message: lib/ring: Defining dependency "ring" 00:02:42.231 Message: lib/rcu: Defining dependency "rcu" 00:02:42.231 Message: lib/mempool: Defining dependency "mempool" 00:02:42.231 Message: lib/mbuf: Defining dependency "mbuf" 00:02:42.231 Fetching value of define 
"__PCLMUL__" : 1 (cached) 00:02:42.231 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:42.231 Compiler for C supports arguments -mpclmul: YES 00:02:42.231 Compiler for C supports arguments -maes: YES 00:02:42.231 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:42.231 Compiler for C supports arguments -mavx512bw: YES 00:02:42.231 Compiler for C supports arguments -mavx512dq: YES 00:02:42.231 Compiler for C supports arguments -mavx512vl: YES 00:02:42.231 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:42.231 Compiler for C supports arguments -mavx2: YES 00:02:42.231 Compiler for C supports arguments -mavx: YES 00:02:42.231 Message: lib/net: Defining dependency "net" 00:02:42.231 Message: lib/meter: Defining dependency "meter" 00:02:42.231 Message: lib/ethdev: Defining dependency "ethdev" 00:02:42.231 Message: lib/pci: Defining dependency "pci" 00:02:42.231 Message: lib/cmdline: Defining dependency "cmdline" 00:02:42.231 Message: lib/metrics: Defining dependency "metrics" 00:02:42.231 Message: lib/hash: Defining dependency "hash" 00:02:42.231 Message: lib/timer: Defining dependency "timer" 00:02:42.231 Fetching value of define "__AVX2__" : 1 (cached) 00:02:42.231 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:42.231 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:02:42.231 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:02:42.231 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:02:42.231 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:02:42.231 Message: lib/acl: Defining dependency "acl" 00:02:42.232 Message: lib/bbdev: Defining dependency "bbdev" 00:02:42.232 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:42.232 Run-time dependency libelf found: YES 0.190 00:02:42.232 Message: lib/bpf: Defining dependency "bpf" 00:02:42.232 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:42.232 Message: lib/compressdev: Defining dependency "compressdev" 00:02:42.232 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:42.232 Message: lib/distributor: Defining dependency "distributor" 00:02:42.232 Message: lib/efd: Defining dependency "efd" 00:02:42.232 Message: lib/eventdev: Defining dependency "eventdev" 00:02:42.232 Message: lib/gpudev: Defining dependency "gpudev" 00:02:42.232 Message: lib/gro: Defining dependency "gro" 00:02:42.232 Message: lib/gso: Defining dependency "gso" 00:02:42.232 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:42.232 Message: lib/jobstats: Defining dependency "jobstats" 00:02:42.232 Message: lib/latencystats: Defining dependency "latencystats" 00:02:42.232 Message: lib/lpm: Defining dependency "lpm" 00:02:42.232 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:42.232 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:42.232 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:42.232 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:42.232 Message: lib/member: Defining dependency "member" 00:02:42.232 Message: lib/pcapng: Defining dependency "pcapng" 00:02:42.232 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:42.232 Message: lib/power: Defining dependency "power" 00:02:42.232 Message: lib/rawdev: Defining dependency "rawdev" 00:02:42.232 Message: lib/regexdev: Defining dependency "regexdev" 00:02:42.232 Message: lib/dmadev: Defining dependency "dmadev" 00:02:42.232 Message: lib/rib: Defining 
dependency "rib" 00:02:42.232 Message: lib/reorder: Defining dependency "reorder" 00:02:42.232 Message: lib/sched: Defining dependency "sched" 00:02:42.232 Message: lib/security: Defining dependency "security" 00:02:42.232 Message: lib/stack: Defining dependency "stack" 00:02:42.232 Has header "linux/userfaultfd.h" : YES 00:02:42.232 Message: lib/vhost: Defining dependency "vhost" 00:02:42.232 Message: lib/ipsec: Defining dependency "ipsec" 00:02:42.232 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:42.232 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:42.232 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:02:42.232 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:42.232 Message: lib/fib: Defining dependency "fib" 00:02:42.232 Message: lib/port: Defining dependency "port" 00:02:42.232 Message: lib/pdump: Defining dependency "pdump" 00:02:42.232 Message: lib/table: Defining dependency "table" 00:02:42.232 Message: lib/pipeline: Defining dependency "pipeline" 00:02:42.232 Message: lib/graph: Defining dependency "graph" 00:02:42.232 Message: lib/node: Defining dependency "node" 00:02:42.232 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:42.232 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:42.232 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:42.232 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:42.232 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:42.232 Compiler for C supports arguments -Wno-unused-value: YES 00:02:42.232 Compiler for C supports arguments -Wno-format: YES 00:02:42.232 Compiler for C supports arguments -Wno-format-security: YES 00:02:42.232 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:43.166 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:43.166 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:43.166 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:43.166 Fetching value of define "__AVX2__" : 1 (cached) 00:02:43.166 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:43.166 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:43.166 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:43.166 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:43.166 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:43.166 Program doxygen found: YES (/usr/bin/doxygen) 00:02:43.166 Configuring doxy-api.conf using configuration 00:02:43.166 Program sphinx-build found: NO 00:02:43.166 Configuring rte_build_config.h using configuration 00:02:43.166 Message: 00:02:43.166 ================= 00:02:43.166 Applications Enabled 00:02:43.166 ================= 00:02:43.166 00:02:43.166 apps: 00:02:43.166 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:02:43.166 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:02:43.166 test-security-perf, 00:02:43.166 00:02:43.166 Message: 00:02:43.166 ================= 00:02:43.166 Libraries Enabled 00:02:43.166 ================= 00:02:43.166 00:02:43.166 libs: 00:02:43.166 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:02:43.166 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:02:43.166 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:02:43.166 eventdev, gpudev, gro, gso, ip_frag, 
jobstats, latencystats, lpm, 00:02:43.166 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:02:43.166 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:02:43.166 table, pipeline, graph, node, 00:02:43.166 00:02:43.166 Message: 00:02:43.166 =============== 00:02:43.166 Drivers Enabled 00:02:43.166 =============== 00:02:43.166 00:02:43.166 common: 00:02:43.166 00:02:43.166 bus: 00:02:43.166 pci, vdev, 00:02:43.166 mempool: 00:02:43.166 ring, 00:02:43.166 dma: 00:02:43.166 00:02:43.166 net: 00:02:43.166 i40e, 00:02:43.166 raw: 00:02:43.166 00:02:43.166 crypto: 00:02:43.166 00:02:43.166 compress: 00:02:43.166 00:02:43.166 regex: 00:02:43.166 00:02:43.166 vdpa: 00:02:43.166 00:02:43.166 event: 00:02:43.166 00:02:43.166 baseband: 00:02:43.166 00:02:43.166 gpu: 00:02:43.166 00:02:43.166 00:02:43.166 Message: 00:02:43.166 ================= 00:02:43.166 Content Skipped 00:02:43.166 ================= 00:02:43.166 00:02:43.166 apps: 00:02:43.166 00:02:43.166 libs: 00:02:43.166 kni: explicitly disabled via build config (deprecated lib) 00:02:43.166 flow_classify: explicitly disabled via build config (deprecated lib) 00:02:43.166 00:02:43.166 drivers: 00:02:43.166 common/cpt: not in enabled drivers build config 00:02:43.166 common/dpaax: not in enabled drivers build config 00:02:43.166 common/iavf: not in enabled drivers build config 00:02:43.166 common/idpf: not in enabled drivers build config 00:02:43.166 common/mvep: not in enabled drivers build config 00:02:43.166 common/octeontx: not in enabled drivers build config 00:02:43.166 bus/auxiliary: not in enabled drivers build config 00:02:43.166 bus/dpaa: not in enabled drivers build config 00:02:43.166 bus/fslmc: not in enabled drivers build config 00:02:43.166 bus/ifpga: not in enabled drivers build config 00:02:43.166 bus/vmbus: not in enabled drivers build config 00:02:43.166 common/cnxk: not in enabled drivers build config 00:02:43.166 common/mlx5: not in enabled drivers build config 00:02:43.166 common/qat: not in enabled drivers build config 00:02:43.166 common/sfc_efx: not in enabled drivers build config 00:02:43.166 mempool/bucket: not in enabled drivers build config 00:02:43.166 mempool/cnxk: not in enabled drivers build config 00:02:43.166 mempool/dpaa: not in enabled drivers build config 00:02:43.166 mempool/dpaa2: not in enabled drivers build config 00:02:43.166 mempool/octeontx: not in enabled drivers build config 00:02:43.166 mempool/stack: not in enabled drivers build config 00:02:43.166 dma/cnxk: not in enabled drivers build config 00:02:43.166 dma/dpaa: not in enabled drivers build config 00:02:43.166 dma/dpaa2: not in enabled drivers build config 00:02:43.166 dma/hisilicon: not in enabled drivers build config 00:02:43.166 dma/idxd: not in enabled drivers build config 00:02:43.166 dma/ioat: not in enabled drivers build config 00:02:43.166 dma/skeleton: not in enabled drivers build config 00:02:43.166 net/af_packet: not in enabled drivers build config 00:02:43.166 net/af_xdp: not in enabled drivers build config 00:02:43.166 net/ark: not in enabled drivers build config 00:02:43.166 net/atlantic: not in enabled drivers build config 00:02:43.166 net/avp: not in enabled drivers build config 00:02:43.167 net/axgbe: not in enabled drivers build config 00:02:43.167 net/bnx2x: not in enabled drivers build config 00:02:43.167 net/bnxt: not in enabled drivers build config 00:02:43.167 net/bonding: not in enabled drivers build config 00:02:43.167 net/cnxk: not in enabled drivers build config 00:02:43.167 net/cxgbe: not in 
enabled drivers build config 00:02:43.167 net/dpaa: not in enabled drivers build config 00:02:43.167 net/dpaa2: not in enabled drivers build config 00:02:43.167 net/e1000: not in enabled drivers build config 00:02:43.167 net/ena: not in enabled drivers build config 00:02:43.167 net/enetc: not in enabled drivers build config 00:02:43.167 net/enetfec: not in enabled drivers build config 00:02:43.167 net/enic: not in enabled drivers build config 00:02:43.167 net/failsafe: not in enabled drivers build config 00:02:43.167 net/fm10k: not in enabled drivers build config 00:02:43.167 net/gve: not in enabled drivers build config 00:02:43.167 net/hinic: not in enabled drivers build config 00:02:43.167 net/hns3: not in enabled drivers build config 00:02:43.167 net/iavf: not in enabled drivers build config 00:02:43.167 net/ice: not in enabled drivers build config 00:02:43.167 net/idpf: not in enabled drivers build config 00:02:43.167 net/igc: not in enabled drivers build config 00:02:43.167 net/ionic: not in enabled drivers build config 00:02:43.167 net/ipn3ke: not in enabled drivers build config 00:02:43.167 net/ixgbe: not in enabled drivers build config 00:02:43.167 net/kni: not in enabled drivers build config 00:02:43.167 net/liquidio: not in enabled drivers build config 00:02:43.167 net/mana: not in enabled drivers build config 00:02:43.167 net/memif: not in enabled drivers build config 00:02:43.167 net/mlx4: not in enabled drivers build config 00:02:43.167 net/mlx5: not in enabled drivers build config 00:02:43.167 net/mvneta: not in enabled drivers build config 00:02:43.167 net/mvpp2: not in enabled drivers build config 00:02:43.167 net/netvsc: not in enabled drivers build config 00:02:43.167 net/nfb: not in enabled drivers build config 00:02:43.167 net/nfp: not in enabled drivers build config 00:02:43.167 net/ngbe: not in enabled drivers build config 00:02:43.167 net/null: not in enabled drivers build config 00:02:43.167 net/octeontx: not in enabled drivers build config 00:02:43.167 net/octeon_ep: not in enabled drivers build config 00:02:43.167 net/pcap: not in enabled drivers build config 00:02:43.167 net/pfe: not in enabled drivers build config 00:02:43.167 net/qede: not in enabled drivers build config 00:02:43.167 net/ring: not in enabled drivers build config 00:02:43.167 net/sfc: not in enabled drivers build config 00:02:43.167 net/softnic: not in enabled drivers build config 00:02:43.167 net/tap: not in enabled drivers build config 00:02:43.167 net/thunderx: not in enabled drivers build config 00:02:43.167 net/txgbe: not in enabled drivers build config 00:02:43.167 net/vdev_netvsc: not in enabled drivers build config 00:02:43.167 net/vhost: not in enabled drivers build config 00:02:43.167 net/virtio: not in enabled drivers build config 00:02:43.167 net/vmxnet3: not in enabled drivers build config 00:02:43.167 raw/cnxk_bphy: not in enabled drivers build config 00:02:43.167 raw/cnxk_gpio: not in enabled drivers build config 00:02:43.167 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:43.167 raw/ifpga: not in enabled drivers build config 00:02:43.167 raw/ntb: not in enabled drivers build config 00:02:43.167 raw/skeleton: not in enabled drivers build config 00:02:43.167 crypto/armv8: not in enabled drivers build config 00:02:43.167 crypto/bcmfs: not in enabled drivers build config 00:02:43.167 crypto/caam_jr: not in enabled drivers build config 00:02:43.167 crypto/ccp: not in enabled drivers build config 00:02:43.167 crypto/cnxk: not in enabled drivers build config 00:02:43.167 
crypto/dpaa_sec: not in enabled drivers build config 00:02:43.167 crypto/dpaa2_sec: not in enabled drivers build config 00:02:43.167 crypto/ipsec_mb: not in enabled drivers build config 00:02:43.167 crypto/mlx5: not in enabled drivers build config 00:02:43.167 crypto/mvsam: not in enabled drivers build config 00:02:43.167 crypto/nitrox: not in enabled drivers build config 00:02:43.167 crypto/null: not in enabled drivers build config 00:02:43.167 crypto/octeontx: not in enabled drivers build config 00:02:43.167 crypto/openssl: not in enabled drivers build config 00:02:43.167 crypto/scheduler: not in enabled drivers build config 00:02:43.167 crypto/uadk: not in enabled drivers build config 00:02:43.167 crypto/virtio: not in enabled drivers build config 00:02:43.167 compress/isal: not in enabled drivers build config 00:02:43.167 compress/mlx5: not in enabled drivers build config 00:02:43.167 compress/octeontx: not in enabled drivers build config 00:02:43.167 compress/zlib: not in enabled drivers build config 00:02:43.167 regex/mlx5: not in enabled drivers build config 00:02:43.167 regex/cn9k: not in enabled drivers build config 00:02:43.167 vdpa/ifc: not in enabled drivers build config 00:02:43.167 vdpa/mlx5: not in enabled drivers build config 00:02:43.167 vdpa/sfc: not in enabled drivers build config 00:02:43.167 event/cnxk: not in enabled drivers build config 00:02:43.167 event/dlb2: not in enabled drivers build config 00:02:43.167 event/dpaa: not in enabled drivers build config 00:02:43.167 event/dpaa2: not in enabled drivers build config 00:02:43.167 event/dsw: not in enabled drivers build config 00:02:43.167 event/opdl: not in enabled drivers build config 00:02:43.167 event/skeleton: not in enabled drivers build config 00:02:43.167 event/sw: not in enabled drivers build config 00:02:43.167 event/octeontx: not in enabled drivers build config 00:02:43.167 baseband/acc: not in enabled drivers build config 00:02:43.167 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:43.167 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:43.167 baseband/la12xx: not in enabled drivers build config 00:02:43.167 baseband/null: not in enabled drivers build config 00:02:43.167 baseband/turbo_sw: not in enabled drivers build config 00:02:43.167 gpu/cuda: not in enabled drivers build config 00:02:43.167 00:02:43.167 00:02:43.167 Build targets in project: 314 00:02:43.167 00:02:43.167 DPDK 22.11.4 00:02:43.167 00:02:43.167 User defined options 00:02:43.167 libdir : lib 00:02:43.167 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:43.167 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:43.167 c_link_args : 00:02:43.167 enable_docs : false 00:02:43.167 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:43.167 enable_kmods : false 00:02:43.167 machine : native 00:02:43.167 tests : false 00:02:43.167 00:02:43.167 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:43.167 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
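For reference, the configuration summary above is the output of a plain `meson` configure step. The sketch below is an assumption, not the exact command from SPDK's build scripts (the log records only the resulting "User defined options" and the `ninja` step that follows); it shows an equivalent invocation using the explicit `meson setup` form that the deprecation WARNING recommends, with paths and option values copied from the summary.

# Hedged sketch: a `meson setup` call that would yield the user-defined options above.
# The real driver script (common/autobuild_common.sh) may pass these differently.
cd /home/vagrant/spdk_repo/dpdk
meson setup build-tmp \
    --prefix=/home/vagrant/spdk_repo/dpdk/build \
    --libdir=lib \
    -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
    -Denable_docs=false \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
    -Denable_kmods=false \
    -Dmachine=native \
    -Dtests=false
ninja -C build-tmp -j10    # same build step the log runs next

Any driver listed under "Content Skipped" above (for example net/ixgbe) could be built by appending it to the -Denable_drivers list in the same way; everything not named there is left out of the 314 build targets reported.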
00:02:43.167 05:49:34 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:43.167 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:43.167 [1/743] Generating lib/rte_kvargs_def with a custom command 00:02:43.167 [2/743] Generating lib/rte_kvargs_mingw with a custom command 00:02:43.167 [3/743] Generating lib/rte_telemetry_def with a custom command 00:02:43.167 [4/743] Generating lib/rte_telemetry_mingw with a custom command 00:02:43.425 [5/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:43.425 [6/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:43.425 [7/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:43.425 [8/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:43.425 [9/743] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:43.425 [10/743] Linking static target lib/librte_kvargs.a 00:02:43.425 [11/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:43.425 [12/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:43.425 [13/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:43.425 [14/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:43.425 [15/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:43.425 [16/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:43.425 [17/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:43.425 [18/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:43.683 [19/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:43.683 [20/743] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.683 [21/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:43.683 [22/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:02:43.683 [23/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:43.683 [24/743] Linking target lib/librte_kvargs.so.23.0 00:02:43.683 [25/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:43.683 [26/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:43.683 [27/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:43.683 [28/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:43.683 [29/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:43.683 [30/743] Linking static target lib/librte_telemetry.a 00:02:43.683 [31/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:43.941 [32/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:43.941 [33/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:43.941 [34/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:43.941 [35/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:43.941 [36/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:43.941 [37/743] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:02:43.941 [38/743] Compiling C 
object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:43.941 [39/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:43.941 [40/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:43.941 [41/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:44.198 [42/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:44.198 [43/743] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.198 [44/743] Linking target lib/librte_telemetry.so.23.0 00:02:44.198 [45/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:44.198 [46/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:44.198 [47/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:44.198 [48/743] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:02:44.456 [49/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:44.456 [50/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:44.456 [51/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:44.456 [52/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:44.456 [53/743] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:44.456 [54/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:44.456 [55/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:44.456 [56/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:44.456 [57/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:44.456 [58/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:44.456 [59/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:44.456 [60/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:44.456 [61/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:44.456 [62/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:44.456 [63/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:44.714 [64/743] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:44.714 [65/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:44.714 [66/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:02:44.714 [67/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:44.714 [68/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:44.714 [69/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:44.714 [70/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:44.714 [71/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:44.714 [72/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:44.714 [73/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:44.714 [74/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:44.714 [75/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:44.714 [76/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:44.971 [77/743] Generating lib/rte_eal_mingw with a custom command 00:02:44.971 [78/743] Generating lib/rte_eal_def with a custom 
command 00:02:44.971 [79/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:44.971 [80/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:44.971 [81/743] Generating lib/rte_ring_def with a custom command 00:02:44.971 [82/743] Generating lib/rte_ring_mingw with a custom command 00:02:44.971 [83/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:44.971 [84/743] Generating lib/rte_rcu_def with a custom command 00:02:44.971 [85/743] Generating lib/rte_rcu_mingw with a custom command 00:02:44.971 [86/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:44.971 [87/743] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:44.971 [88/743] Linking static target lib/librte_ring.a 00:02:44.971 [89/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:44.971 [90/743] Generating lib/rte_mempool_def with a custom command 00:02:45.227 [91/743] Generating lib/rte_mempool_mingw with a custom command 00:02:45.227 [92/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:45.227 [93/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:45.227 [94/743] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.484 [95/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:45.484 [96/743] Linking static target lib/librte_eal.a 00:02:45.484 [97/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:45.484 [98/743] Generating lib/rte_mbuf_def with a custom command 00:02:45.742 [99/743] Generating lib/rte_mbuf_mingw with a custom command 00:02:45.742 [100/743] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:45.742 [101/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:45.742 [102/743] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:45.742 [103/743] Linking static target lib/librte_rcu.a 00:02:45.742 [104/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:45.742 [105/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:46.000 [106/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:46.000 [107/743] Linking static target lib/librte_mempool.a 00:02:46.000 [108/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:46.258 [109/743] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.258 [110/743] Generating lib/rte_net_def with a custom command 00:02:46.258 [111/743] Generating lib/rte_net_mingw with a custom command 00:02:46.259 [112/743] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:46.259 [113/743] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:46.259 [114/743] Generating lib/rte_meter_def with a custom command 00:02:46.259 [115/743] Generating lib/rte_meter_mingw with a custom command 00:02:46.259 [116/743] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:46.259 [117/743] Linking static target lib/librte_meter.a 00:02:46.259 [118/743] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:46.516 [119/743] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:46.516 [120/743] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:46.516 [121/743] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.516 [122/743] Compiling C 
object lib/librte_net.a.p/net_rte_net.c.o 00:02:46.780 [123/743] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:46.780 [124/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:46.780 [125/743] Linking static target lib/librte_net.a 00:02:46.780 [126/743] Linking static target lib/librte_mbuf.a 00:02:46.780 [127/743] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.055 [128/743] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.055 [129/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:47.055 [130/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:47.055 [131/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:47.055 [132/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:47.329 [133/743] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.329 [134/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:47.622 [135/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:47.880 [136/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:47.880 [137/743] Generating lib/rte_ethdev_def with a custom command 00:02:47.880 [138/743] Generating lib/rte_ethdev_mingw with a custom command 00:02:47.880 [139/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:47.880 [140/743] Generating lib/rte_pci_def with a custom command 00:02:47.880 [141/743] Generating lib/rte_pci_mingw with a custom command 00:02:47.880 [142/743] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:47.880 [143/743] Linking static target lib/librte_pci.a 00:02:47.880 [144/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:47.880 [145/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:47.880 [146/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:48.138 [147/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:48.138 [148/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:48.138 [149/743] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.138 [150/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:48.138 [151/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:48.138 [152/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:48.138 [153/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:48.139 [154/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:48.139 [155/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:48.397 [156/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:48.397 [157/743] Generating lib/rte_cmdline_def with a custom command 00:02:48.397 [158/743] Generating lib/rte_cmdline_mingw with a custom command 00:02:48.397 [159/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:48.397 [160/743] Generating lib/rte_metrics_def with a custom command 00:02:48.397 [161/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:48.397 [162/743] Generating lib/rte_metrics_mingw with a custom command 00:02:48.397 [163/743] Compiling C object 
lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:48.397 [164/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:48.397 [165/743] Generating lib/rte_hash_def with a custom command 00:02:48.397 [166/743] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:48.397 [167/743] Generating lib/rte_hash_mingw with a custom command 00:02:48.656 [168/743] Generating lib/rte_timer_def with a custom command 00:02:48.656 [169/743] Generating lib/rte_timer_mingw with a custom command 00:02:48.656 [170/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:48.656 [171/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:48.656 [172/743] Linking static target lib/librte_cmdline.a 00:02:48.656 [173/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:48.914 [174/743] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:48.914 [175/743] Linking static target lib/librte_metrics.a 00:02:48.914 [176/743] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:48.914 [177/743] Linking static target lib/librte_timer.a 00:02:49.173 [178/743] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.432 [179/743] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.432 [180/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:49.432 [181/743] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.432 [182/743] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:49.432 [183/743] Linking static target lib/librte_ethdev.a 00:02:49.432 [184/743] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:49.999 [185/743] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:49.999 [186/743] Generating lib/rte_acl_def with a custom command 00:02:49.999 [187/743] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:49.999 [188/743] Generating lib/rte_acl_mingw with a custom command 00:02:49.999 [189/743] Generating lib/rte_bbdev_def with a custom command 00:02:50.257 [190/743] Generating lib/rte_bbdev_mingw with a custom command 00:02:50.257 [191/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:50.257 [192/743] Generating lib/rte_bitratestats_def with a custom command 00:02:50.257 [193/743] Generating lib/rte_bitratestats_mingw with a custom command 00:02:50.516 [194/743] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:50.775 [195/743] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:50.775 [196/743] Linking static target lib/librte_bitratestats.a 00:02:50.775 [197/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:51.033 [198/743] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.033 [199/743] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:51.033 [200/743] Linking static target lib/librte_bbdev.a 00:02:51.033 [201/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:51.291 [202/743] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:51.291 [203/743] Linking static target lib/librte_hash.a 00:02:51.549 [204/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:51.549 [205/743] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:51.549 [206/743] Compiling C object 
lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:51.549 [207/743] Linking static target lib/acl/libavx512_tmp.a 00:02:51.549 [208/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:51.807 [209/743] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.065 [210/743] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.065 [211/743] Generating lib/rte_bpf_def with a custom command 00:02:52.065 [212/743] Generating lib/rte_bpf_mingw with a custom command 00:02:52.065 [213/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:52.065 [214/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:52.065 [215/743] Generating lib/rte_cfgfile_def with a custom command 00:02:52.065 [216/743] Generating lib/rte_cfgfile_mingw with a custom command 00:02:52.323 [217/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:02:52.323 [218/743] Linking static target lib/librte_acl.a 00:02:52.323 [219/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:52.323 [220/743] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:52.323 [221/743] Linking static target lib/librte_cfgfile.a 00:02:52.582 [222/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:52.582 [223/743] Generating lib/rte_compressdev_def with a custom command 00:02:52.582 [224/743] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.582 [225/743] Generating lib/rte_compressdev_mingw with a custom command 00:02:52.582 [226/743] Linking target lib/librte_eal.so.23.0 00:02:52.582 [227/743] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.582 [228/743] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.582 [229/743] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:52.840 [230/743] Linking target lib/librte_ring.so.23.0 00:02:52.840 [231/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:52.840 [232/743] Linking target lib/librte_meter.so.23.0 00:02:52.840 [233/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:52.840 [234/743] Linking target lib/librte_pci.so.23.0 00:02:52.840 [235/743] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:52.840 [236/743] Linking target lib/librte_rcu.so.23.0 00:02:52.840 [237/743] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:52.840 [238/743] Linking target lib/librte_mempool.so.23.0 00:02:53.099 [239/743] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:53.099 [240/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:53.099 [241/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:53.099 [242/743] Linking static target lib/librte_bpf.a 00:02:53.099 [243/743] Linking target lib/librte_timer.so.23.0 00:02:53.099 [244/743] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:53.099 [245/743] Linking target lib/librte_acl.so.23.0 00:02:53.099 [246/743] Linking target lib/librte_cfgfile.so.23.0 00:02:53.099 [247/743] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:53.099 [248/743] Generating lib/rte_cryptodev_def with a custom command 00:02:53.099 [249/743] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:53.099 [250/743] Generating lib/rte_cryptodev_mingw with a custom command 00:02:53.099 [251/743] Linking static target lib/librte_compressdev.a 00:02:53.099 [252/743] Linking target lib/librte_mbuf.so.23.0 00:02:53.099 [253/743] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:53.100 [254/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:53.100 [255/743] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:53.100 [256/743] Generating lib/rte_distributor_def with a custom command 00:02:53.358 [257/743] Generating lib/rte_distributor_mingw with a custom command 00:02:53.358 [258/743] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:53.358 [259/743] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.358 [260/743] Linking target lib/librte_net.so.23.0 00:02:53.358 [261/743] Linking target lib/librte_bbdev.so.23.0 00:02:53.358 [262/743] Generating lib/rte_efd_def with a custom command 00:02:53.358 [263/743] Generating lib/rte_efd_mingw with a custom command 00:02:53.358 [264/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:53.358 [265/743] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:53.358 [266/743] Linking target lib/librte_cmdline.so.23.0 00:02:53.616 [267/743] Linking target lib/librte_hash.so.23.0 00:02:53.616 [268/743] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:53.875 [269/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:53.875 [270/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:53.875 [271/743] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.134 [272/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:54.134 [273/743] Linking target lib/librte_compressdev.so.23.0 00:02:54.134 [274/743] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.134 [275/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:54.134 [276/743] Linking target lib/librte_ethdev.so.23.0 00:02:54.134 [277/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:54.134 [278/743] Linking static target lib/librte_distributor.a 00:02:54.392 [279/743] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:54.392 [280/743] Linking target lib/librte_metrics.so.23.0 00:02:54.392 [281/743] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.392 [282/743] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:54.392 [283/743] Linking target lib/librte_bpf.so.23.0 00:02:54.392 [284/743] Linking target lib/librte_bitratestats.so.23.0 00:02:54.650 [285/743] Linking target lib/librte_distributor.so.23.0 00:02:54.650 [286/743] Generating lib/rte_eventdev_def with a custom command 00:02:54.650 [287/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:54.650 [288/743] Generating lib/rte_eventdev_mingw with a custom command 00:02:54.650 [289/743] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:54.650 [290/743] Generating 
lib/rte_gpudev_def with a custom command 00:02:54.650 [291/743] Generating lib/rte_gpudev_mingw with a custom command 00:02:54.909 [292/743] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:54.909 [293/743] Linking static target lib/librte_efd.a 00:02:54.909 [294/743] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.909 [295/743] Linking target lib/librte_efd.so.23.0 00:02:55.167 [296/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:55.167 [297/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:55.167 [298/743] Linking static target lib/librte_cryptodev.a 00:02:55.425 [299/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:55.425 [300/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:55.425 [301/743] Generating lib/rte_gro_def with a custom command 00:02:55.425 [302/743] Generating lib/rte_gro_mingw with a custom command 00:02:55.425 [303/743] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:55.425 [304/743] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:55.425 [305/743] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:55.425 [306/743] Linking static target lib/librte_gpudev.a 00:02:55.683 [307/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:55.941 [308/743] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:55.941 [309/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:55.941 [310/743] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:56.200 [311/743] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:56.200 [312/743] Generating lib/rte_gso_def with a custom command 00:02:56.200 [313/743] Generating lib/rte_gso_mingw with a custom command 00:02:56.200 [314/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:56.200 [315/743] Linking static target lib/librte_gro.a 00:02:56.200 [316/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:56.200 [317/743] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.458 [318/743] Linking target lib/librte_gpudev.so.23.0 00:02:56.458 [319/743] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:56.458 [320/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:56.458 [321/743] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.458 [322/743] Linking target lib/librte_gro.so.23.0 00:02:56.458 [323/743] Generating lib/rte_ip_frag_def with a custom command 00:02:56.458 [324/743] Generating lib/rte_ip_frag_mingw with a custom command 00:02:56.717 [325/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:56.717 [326/743] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:56.717 [327/743] Linking static target lib/librte_gso.a 00:02:56.717 [328/743] Linking static target lib/librte_eventdev.a 00:02:56.717 [329/743] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:56.717 [330/743] Linking static target lib/librte_jobstats.a 00:02:56.717 [331/743] Generating lib/rte_jobstats_def with a custom command 00:02:56.717 [332/743] Generating lib/rte_jobstats_mingw with a custom command 00:02:56.717 [333/743] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.975 
[334/743] Linking target lib/librte_gso.so.23.0 00:02:56.975 [335/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:56.975 [336/743] Generating lib/rte_latencystats_def with a custom command 00:02:56.975 [337/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:56.975 [338/743] Generating lib/rte_latencystats_mingw with a custom command 00:02:56.975 [339/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:56.975 [340/743] Generating lib/rte_lpm_def with a custom command 00:02:56.975 [341/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:56.975 [342/743] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.975 [343/743] Generating lib/rte_lpm_mingw with a custom command 00:02:57.233 [344/743] Linking target lib/librte_jobstats.so.23.0 00:02:57.233 [345/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:57.233 [346/743] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.233 [347/743] Linking target lib/librte_cryptodev.so.23.0 00:02:57.233 [348/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:57.233 [349/743] Linking static target lib/librte_ip_frag.a 00:02:57.492 [350/743] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:57.750 [351/743] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.750 [352/743] Linking target lib/librte_ip_frag.so.23.0 00:02:57.750 [353/743] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:57.750 [354/743] Linking static target lib/librte_latencystats.a 00:02:57.750 [355/743] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:57.750 [356/743] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:57.750 [357/743] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:57.750 [358/743] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:57.750 [359/743] Generating lib/rte_member_def with a custom command 00:02:57.750 [360/743] Generating lib/rte_member_mingw with a custom command 00:02:57.750 [361/743] Generating lib/rte_pcapng_def with a custom command 00:02:57.750 [362/743] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:57.750 [363/743] Generating lib/rte_pcapng_mingw with a custom command 00:02:58.008 [364/743] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:58.008 [365/743] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.008 [366/743] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:58.008 [367/743] Linking target lib/librte_latencystats.so.23.0 00:02:58.008 [368/743] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:58.267 [369/743] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:58.267 [370/743] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:58.267 [371/743] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:58.267 [372/743] Linking static target lib/librte_lpm.a 00:02:58.525 [373/743] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:02:58.525 [374/743] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:58.525 
[375/743] Generating lib/rte_power_def with a custom command 00:02:58.525 [376/743] Generating lib/rte_power_mingw with a custom command 00:02:58.525 [377/743] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.783 [378/743] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:58.783 [379/743] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.783 [380/743] Linking target lib/librte_eventdev.so.23.0 00:02:58.783 [381/743] Generating lib/rte_rawdev_def with a custom command 00:02:58.783 [382/743] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:58.783 [383/743] Generating lib/rte_rawdev_mingw with a custom command 00:02:58.783 [384/743] Linking target lib/librte_lpm.so.23.0 00:02:58.783 [385/743] Generating lib/rte_regexdev_def with a custom command 00:02:58.783 [386/743] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:58.783 [387/743] Generating lib/rte_regexdev_mingw with a custom command 00:02:58.783 [388/743] Linking static target lib/librte_pcapng.a 00:02:58.783 [389/743] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:58.783 [390/743] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:02:58.783 [391/743] Generating lib/rte_dmadev_mingw with a custom command 00:02:58.783 [392/743] Generating lib/rte_dmadev_def with a custom command 00:02:58.783 [393/743] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:58.783 [394/743] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:58.783 [395/743] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:58.783 [396/743] Generating lib/rte_rib_def with a custom command 00:02:58.783 [397/743] Linking static target lib/librte_rawdev.a 00:02:59.042 [398/743] Generating lib/rte_rib_mingw with a custom command 00:02:59.042 [399/743] Generating lib/rte_reorder_def with a custom command 00:02:59.042 [400/743] Generating lib/rte_reorder_mingw with a custom command 00:02:59.042 [401/743] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.042 [402/743] Linking target lib/librte_pcapng.so.23.0 00:02:59.300 [403/743] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:59.300 [404/743] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:59.300 [405/743] Linking static target lib/librte_power.a 00:02:59.300 [406/743] Linking static target lib/librte_dmadev.a 00:02:59.300 [407/743] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:59.300 [408/743] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.300 [409/743] Linking target lib/librte_rawdev.so.23.0 00:02:59.300 [410/743] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:59.300 [411/743] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:59.558 [412/743] Linking static target lib/librte_member.a 00:02:59.558 [413/743] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:59.558 [414/743] Generating lib/rte_sched_def with a custom command 00:02:59.558 [415/743] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:59.558 [416/743] Linking static target lib/librte_regexdev.a 00:02:59.558 [417/743] Generating lib/rte_sched_mingw with a custom command 00:02:59.558 [418/743] Compiling 
C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:59.558 [419/743] Generating lib/rte_security_def with a custom command 00:02:59.558 [420/743] Generating lib/rte_security_mingw with a custom command 00:02:59.816 [421/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:59.816 [422/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:59.816 [423/743] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:59.816 [424/743] Linking static target lib/librte_reorder.a 00:02:59.816 [425/743] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.816 [426/743] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.816 [427/743] Linking target lib/librte_dmadev.so.23.0 00:02:59.816 [428/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:59.816 [429/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:59.816 [430/743] Linking static target lib/librte_stack.a 00:02:59.816 [431/743] Generating lib/rte_stack_def with a custom command 00:02:59.816 [432/743] Linking target lib/librte_member.so.23.0 00:02:59.816 [433/743] Generating lib/rte_stack_mingw with a custom command 00:02:59.816 [434/743] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:03:00.074 [435/743] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.074 [436/743] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:00.074 [437/743] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.074 [438/743] Linking target lib/librte_reorder.so.23.0 00:03:00.074 [439/743] Linking target lib/librte_stack.so.23.0 00:03:00.074 [440/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:03:00.074 [441/743] Linking static target lib/librte_rib.a 00:03:00.074 [442/743] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.074 [443/743] Linking target lib/librte_power.so.23.0 00:03:00.332 [444/743] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.332 [445/743] Linking target lib/librte_regexdev.so.23.0 00:03:00.332 [446/743] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:00.332 [447/743] Linking static target lib/librte_security.a 00:03:00.590 [448/743] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.590 [449/743] Linking target lib/librte_rib.so.23.0 00:03:00.590 [450/743] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:00.590 [451/743] Generating lib/rte_vhost_def with a custom command 00:03:00.590 [452/743] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:03:00.590 [453/743] Generating lib/rte_vhost_mingw with a custom command 00:03:00.848 [454/743] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:00.848 [455/743] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.848 [456/743] Linking target lib/librte_security.so.23.0 00:03:00.848 [457/743] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:01.106 [458/743] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:03:01.106 [459/743] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:03:01.106 [460/743] Linking static target lib/librte_sched.a 00:03:01.365 
[461/743] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.365 [462/743] Linking target lib/librte_sched.so.23.0 00:03:01.365 [463/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:03:01.622 [464/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:01.622 [465/743] Generating lib/rte_ipsec_def with a custom command 00:03:01.622 [466/743] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:03:01.622 [467/743] Generating lib/rte_ipsec_mingw with a custom command 00:03:01.622 [468/743] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:03:01.878 [469/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:03:01.878 [470/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:01.878 [471/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:03:02.134 [472/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:03:02.134 [473/743] Generating lib/rte_fib_def with a custom command 00:03:02.134 [474/743] Generating lib/rte_fib_mingw with a custom command 00:03:02.134 [475/743] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:03:02.134 [476/743] Linking static target lib/fib/libtrie_avx512_tmp.a 00:03:02.134 [477/743] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:03:02.134 [478/743] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:03:02.391 [479/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:03:02.391 [480/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:03:02.648 [481/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:03:02.648 [482/743] Linking static target lib/librte_ipsec.a 00:03:02.907 [483/743] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.907 [484/743] Linking target lib/librte_ipsec.so.23.0 00:03:02.907 [485/743] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:03:03.165 [486/743] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:03:03.165 [487/743] Linking static target lib/librte_fib.a 00:03:03.165 [488/743] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:03:03.165 [489/743] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:03:03.165 [490/743] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:03:03.424 [491/743] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:03:03.424 [492/743] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.424 [493/743] Linking target lib/librte_fib.so.23.0 00:03:03.704 [494/743] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:03:03.982 [495/743] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:03:03.982 [496/743] Generating lib/rte_port_def with a custom command 00:03:04.247 [497/743] Generating lib/rte_port_mingw with a custom command 00:03:04.247 [498/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:03:04.247 [499/743] Generating lib/rte_pdump_def with a custom command 00:03:04.247 [500/743] Generating lib/rte_pdump_mingw with a custom command 00:03:04.247 [501/743] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:03:04.247 [502/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:03:04.247 [503/743] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:03:04.505 [504/743] Compiling C object 
lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:03:04.505 [505/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:03:04.505 [506/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:03:04.505 [507/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:03:04.505 [508/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:03:04.763 [509/743] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:03:04.763 [510/743] Linking static target lib/librte_port.a 00:03:05.021 [511/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:03:05.021 [512/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:03:05.279 [513/743] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.279 [514/743] Linking target lib/librte_port.so.23.0 00:03:05.279 [515/743] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:03:05.279 [516/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:03:05.279 [517/743] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:03:05.538 [518/743] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:03:05.538 [519/743] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:03:05.538 [520/743] Linking static target lib/librte_pdump.a 00:03:05.796 [521/743] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.796 [522/743] Linking target lib/librte_pdump.so.23.0 00:03:05.796 [523/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:03:05.796 [524/743] Generating lib/rte_table_def with a custom command 00:03:05.796 [525/743] Generating lib/rte_table_mingw with a custom command 00:03:06.057 [526/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:03:06.057 [527/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:03:06.057 [528/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:03:06.315 [529/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:06.315 [530/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:03:06.315 [531/743] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:03:06.315 [532/743] Generating lib/rte_pipeline_def with a custom command 00:03:06.315 [533/743] Generating lib/rte_pipeline_mingw with a custom command 00:03:06.573 [534/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:03:06.573 [535/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:03:06.573 [536/743] Linking static target lib/librte_table.a 00:03:06.573 [537/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:03:07.139 [538/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:07.139 [539/743] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:07.139 [540/743] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.398 [541/743] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:03:07.398 [542/743] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:07.398 [543/743] Linking target lib/librte_table.so.23.0 00:03:07.398 [544/743] Generating lib/rte_graph_def with a custom command 00:03:07.398 [545/743] Generating lib/rte_graph_mingw with a custom 
command 00:03:07.398 [546/743] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:03:07.656 [547/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:07.656 [548/743] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:03:07.914 [549/743] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:07.914 [550/743] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:07.914 [551/743] Linking static target lib/librte_graph.a 00:03:07.914 [552/743] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:08.172 [553/743] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:08.172 [554/743] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:08.430 [555/743] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:08.688 [556/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:08.688 [557/743] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:08.688 [558/743] Generating lib/rte_node_def with a custom command 00:03:08.688 [559/743] Generating lib/rte_node_mingw with a custom command 00:03:08.688 [560/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:08.688 [561/743] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.688 [562/743] Linking target lib/librte_graph.so.23.0 00:03:08.947 [563/743] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:08.947 [564/743] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:08.947 [565/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:08.947 [566/743] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:03:08.947 [567/743] Generating drivers/rte_bus_pci_def with a custom command 00:03:08.947 [568/743] Generating drivers/rte_bus_pci_mingw with a custom command 00:03:08.947 [569/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:08.947 [570/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:09.205 [571/743] Generating drivers/rte_bus_vdev_def with a custom command 00:03:09.205 [572/743] Generating drivers/rte_bus_vdev_mingw with a custom command 00:03:09.205 [573/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:09.205 [574/743] Generating drivers/rte_mempool_ring_def with a custom command 00:03:09.205 [575/743] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:09.205 [576/743] Generating drivers/rte_mempool_ring_mingw with a custom command 00:03:09.205 [577/743] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:09.205 [578/743] Linking static target lib/librte_node.a 00:03:09.205 [579/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:09.205 [580/743] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:09.205 [581/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:09.464 [582/743] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:09.464 [583/743] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.464 [584/743] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:09.464 [585/743] Linking static target drivers/librte_bus_vdev.a 00:03:09.464 [586/743] Linking target lib/librte_node.so.23.0 00:03:09.464 [587/743] Compiling 
C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:09.722 [588/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:09.722 [589/743] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:09.722 [590/743] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.722 [591/743] Linking target drivers/librte_bus_vdev.so.23.0 00:03:09.722 [592/743] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:09.722 [593/743] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:09.980 [594/743] Linking static target drivers/librte_bus_pci.a 00:03:09.980 [595/743] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:03:09.980 [596/743] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:10.238 [597/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:10.238 [598/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:10.238 [599/743] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.238 [600/743] Linking target drivers/librte_bus_pci.so.23.0 00:03:10.238 [601/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:10.238 [602/743] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:03:10.496 [603/743] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:10.496 [604/743] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:10.496 [605/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:10.496 [606/743] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:10.496 [607/743] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:10.496 [608/743] Linking static target drivers/librte_mempool_ring.a 00:03:10.496 [609/743] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:10.755 [610/743] Linking target drivers/librte_mempool_ring.so.23.0 00:03:11.013 [611/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:11.578 [612/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:11.578 [613/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:11.578 [614/743] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:11.835 [615/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:12.091 [616/743] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:12.091 [617/743] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:12.656 [618/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:12.656 [619/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:12.656 [620/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:12.915 [621/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:12.915 [622/743] Generating drivers/rte_net_i40e_def with a custom command 00:03:12.915 [623/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:12.915 [624/743] Generating 
drivers/rte_net_i40e_mingw with a custom command 00:03:13.172 [625/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:14.108 [626/743] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:14.366 [627/743] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:14.366 [628/743] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:14.366 [629/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:14.366 [630/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:14.624 [631/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:14.624 [632/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:14.624 [633/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:14.624 [634/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:14.883 [635/743] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:14.883 [636/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:03:15.450 [637/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:15.450 [638/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:15.450 [639/743] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:15.708 [640/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:15.708 [641/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:15.708 [642/743] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:15.708 [643/743] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:15.968 [644/743] Linking static target drivers/librte_net_i40e.a 00:03:15.968 [645/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:15.968 [646/743] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:15.968 [647/743] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:15.968 [648/743] Linking static target lib/librte_vhost.a 00:03:15.968 [649/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:16.226 [650/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:16.485 [651/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:16.485 [652/743] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.485 [653/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:16.485 [654/743] Linking target drivers/librte_net_i40e.so.23.0 00:03:16.743 [655/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:16.743 [656/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:17.001 [657/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:17.260 [658/743] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.260 [659/743] Linking target lib/librte_vhost.so.23.0 00:03:17.260 [660/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:17.519 [661/743] 
Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:17.519 [662/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:17.519 [663/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:17.519 [664/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:17.519 [665/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:17.778 [666/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:17.778 [667/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:17.778 [668/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:18.036 [669/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:18.036 [670/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:18.294 [671/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:18.294 [672/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:18.552 [673/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:19.118 [674/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:19.118 [675/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:19.377 [676/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:19.377 [677/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:19.377 [678/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:19.635 [679/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:19.635 [680/743] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:19.635 [681/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:19.894 [682/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:19.894 [683/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:20.152 [684/743] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:20.152 [685/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:20.411 [686/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:20.411 [687/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:20.411 [688/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:20.670 [689/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:20.670 [690/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:20.670 [691/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:20.670 [692/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:20.670 [693/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:20.928 [694/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:21.187 [695/743] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:21.444 [696/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:21.444 [697/743] 
Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:21.701 [698/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:21.701 [699/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:22.267 [700/743] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:22.267 [701/743] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:22.267 [702/743] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:22.267 [703/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:22.524 [704/743] Linking static target lib/librte_pipeline.a 00:03:22.524 [705/743] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:22.525 [706/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:22.782 [707/743] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:23.040 [708/743] Linking target app/dpdk-dumpcap 00:03:23.040 [709/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:23.040 [710/743] Linking target app/dpdk-proc-info 00:03:23.040 [711/743] Linking target app/dpdk-pdump 00:03:23.306 [712/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:23.306 [713/743] Linking target app/dpdk-test-acl 00:03:23.579 [714/743] Linking target app/dpdk-test-bbdev 00:03:23.580 [715/743] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:23.580 [716/743] Linking target app/dpdk-test-compress-perf 00:03:23.580 [717/743] Linking target app/dpdk-test-cmdline 00:03:23.838 [718/743] Linking target app/dpdk-test-crypto-perf 00:03:23.838 [719/743] Linking target app/dpdk-test-eventdev 00:03:23.838 [720/743] Linking target app/dpdk-test-fib 00:03:24.095 [721/743] Linking target app/dpdk-test-flow-perf 00:03:24.095 [722/743] Linking target app/dpdk-test-gpudev 00:03:24.095 [723/743] Linking target app/dpdk-test-pipeline 00:03:24.095 [724/743] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:24.353 [725/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:24.611 [726/743] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:24.611 [727/743] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:24.869 [728/743] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:24.869 [729/743] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:24.869 [730/743] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:25.126 [731/743] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:25.126 [732/743] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.126 [733/743] Linking target lib/librte_pipeline.so.23.0 00:03:25.384 [734/743] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:25.384 [735/743] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:25.384 [736/743] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:25.641 [737/743] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:25.899 [738/743] Linking target app/dpdk-test-regex 00:03:25.899 [739/743] Linking target app/dpdk-test-sad 00:03:26.156 [740/743] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:26.156 [741/743] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:26.721 [742/743] Linking target app/dpdk-test-security-perf 00:03:26.721 [743/743] Linking target 
app/dpdk-testpmd 00:03:26.721 05:50:18 build_native_dpdk -- common/autobuild_common.sh@188 -- $ uname -s 00:03:26.721 05:50:18 build_native_dpdk -- common/autobuild_common.sh@188 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:26.721 05:50:18 build_native_dpdk -- common/autobuild_common.sh@201 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:26.721 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:26.721 [0/1] Installing files. 00:03:26.982 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:26.982 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:26.982 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:26.982 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:26.982 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:26.982 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:26.982 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:26.982 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:26.982 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:26.982 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:26.982 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:26.982 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:26.982 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:26.982 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:26.982 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:26.982 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:26.982 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:26.982 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:26.982 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:26.982 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:26.982 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:26.982 Installing 
/home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:26.982 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:26.982 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:26.982 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:26.982 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:26.982 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:26.982 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:26.982 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:26.982 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:26.982 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:26.982 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:26.982 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:26.982 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:26.982 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:26.982 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:26.982 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:26.982 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:26.982 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:26.982 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:26.982 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:26.982 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:26.982 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 
00:03:26.982 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:26.982 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:26.982 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:26.982 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:26.982 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:26.982 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:26.982 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:26.982 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:26.982 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:26.982 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:26.982 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:26.982 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:26.982 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:26.982 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:26.982 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:26.982 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:26.982 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:26.982 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:26.982 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:26.982 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:26.982 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:26.982 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:26.982 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:26.982 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:26.982 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:26.982 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:26.983 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:26.983 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:26.983 
Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:26.983 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:26.983 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 
00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 
00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:26.984 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:26.985 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.985 
Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:26.985 Installing 
/home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:26.985 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:27.244 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:27.244 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:27.244 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:27.244 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:27.244 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:27.244 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:27.244 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:27.244 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:27.244 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:27.244 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:27.244 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:27.244 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:27.244 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:27.244 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:27.244 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:27.244 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:27.244 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:27.244 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:27.244 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:27.244 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:27.244 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:27.244 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:27.244 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:27.244 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.244 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.244 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.244 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.244 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.244 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.244 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.244 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.244 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.244 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.244 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.244 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.244 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.244 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.245 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.245 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:27.245 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:27.245 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:27.245 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:27.245 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:27.245 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:27.245 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:27.245 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:27.245 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:27.245 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:27.245 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_hash.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing 
lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:27.245 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:27.245 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:27.245 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:27.245 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.245 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:27.245 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.245 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.245 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.245 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.245 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.245 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.245 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.245 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.245 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.515 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.515 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.515 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.515 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.515 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.515 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.515 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.515 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.515 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.515 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.515 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.515 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.515 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.515 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.515 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.515 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.515 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.515 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.515 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 
00:03:27.515 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.515 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.515 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.515 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.515 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.515 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.515 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.515 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.515 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.515 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.515 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.515 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.515 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.515 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.515 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.515 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.515 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.515 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.515 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.515 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.515 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.515 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.515 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.515 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.515 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.515 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.515 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.515 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.515 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.515 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.515 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.515 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.515 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.515 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.515 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.515 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.515 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.515 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.515 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.515 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.515 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.515 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.515 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.515 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.515 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.515 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.515 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing 
/home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.516 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing 
/home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing 
/home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing 
/home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.517 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.518 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.518 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.518 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.518 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.518 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.518 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.518 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.518 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.518 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.518 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.518 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.518 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.518 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.518 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.518 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.518 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.518 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.518 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:03:27.518 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.518 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.518 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.518 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.518 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.518 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.518 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.518 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.518 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.518 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.518 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:27.518 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:27.518 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:03:27.518 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:27.518 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:03:27.518 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:27.518 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:03:27.518 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:27.518 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:03:27.518 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:27.518 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:03:27.518 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:27.518 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:03:27.518 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:27.518 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:03:27.518 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:27.518 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:03:27.518 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:27.518 Installing symlink 
pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:03:27.518 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:27.518 Installing symlink pointing to librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:03:27.518 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:27.518 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:03:27.518 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:27.518 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:03:27.518 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:27.518 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:03:27.518 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:27.518 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:03:27.518 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:27.518 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:03:27.518 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:27.518 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:03:27.518 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:27.518 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:03:27.518 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:27.518 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:03:27.518 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:27.518 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:03:27.518 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:27.518 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:03:27.518 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:27.518 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:03:27.518 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:27.518 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:03:27.518 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:27.518 Installing symlink pointing to 
librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:03:27.518 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:27.518 Installing symlink pointing to librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:03:27.518 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:27.518 Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:03:27.518 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:27.518 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:03:27.518 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:03:27.518 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:03:27.518 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:03:27.518 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:03:27.518 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:03:27.518 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:03:27.518 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:03:27.518 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:03:27.518 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:03:27.518 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:03:27.518 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:03:27.518 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:03:27.518 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:27.518 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:03:27.518 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:27.518 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:03:27.518 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:27.518 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:03:27.518 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:27.518 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:03:27.518 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:27.518 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:03:27.518 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:27.518 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:03:27.518 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:27.518 Installing symlink pointing to librte_member.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:03:27.519 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:27.519 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 00:03:27.519 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:27.519 Installing symlink pointing to librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:03:27.519 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:27.519 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:03:27.519 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:27.519 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:03:27.519 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:27.519 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:03:27.519 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:27.519 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:03:27.519 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:27.519 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:03:27.519 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:27.519 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:03:27.519 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:27.519 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:03:27.519 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:27.519 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:03:27.519 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:27.519 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:03:27.519 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:27.519 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:03:27.519 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:27.519 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:03:27.519 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:27.519 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 
00:03:27.519 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:27.519 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:03:27.519 Installing symlink pointing to librte_pdump.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:27.519 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:03:27.519 Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:27.519 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:03:27.519 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:27.519 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:03:27.519 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:27.519 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:03:27.519 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:27.519 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:03:27.519 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:03:27.519 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:03:27.519 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:03:27.519 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:03:27.519 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:03:27.519 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:03:27.519 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:03:27.519 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:03:27.519 05:50:19 build_native_dpdk -- common/autobuild_common.sh@207 -- $ cat 00:03:27.519 05:50:19 build_native_dpdk -- common/autobuild_common.sh@212 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:27.519 00:03:27.519 real 0m50.774s 00:03:27.519 user 6m6.182s 00:03:27.519 sys 0m53.695s 00:03:27.519 05:50:19 build_native_dpdk -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:27.519 05:50:19 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:27.519 ************************************ 00:03:27.519 END TEST build_native_dpdk 00:03:27.519 ************************************ 00:03:27.777 05:50:19 -- common/autotest_common.sh@1142 -- $ return 0 00:03:27.777 05:50:19 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:27.777 05:50:19 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:27.777 05:50:19 -- spdk/autobuild.sh@51 -- $ 
[[ 0 -eq 1 ]] 00:03:27.777 05:50:19 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:27.777 05:50:19 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:27.777 05:50:19 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:27.777 05:50:19 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:27.777 05:50:19 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:03:27.777 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:27.777 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.777 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:27.777 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:28.342 Using 'verbs' RDMA provider 00:03:41.928 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:54.127 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:54.694 Creating mk/config.mk...done. 00:03:54.694 Creating mk/cc.flags.mk...done. 00:03:54.694 Type 'make' to build. 00:03:54.694 05:50:46 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:03:54.694 05:50:46 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:03:54.694 05:50:46 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:03:54.694 05:50:46 -- common/autotest_common.sh@10 -- $ set +x 00:03:54.694 ************************************ 00:03:54.694 START TEST make 00:03:54.694 ************************************ 00:03:54.694 05:50:46 make -- common/autotest_common.sh@1123 -- $ make -j10 00:03:54.952 make[1]: Nothing to be done for 'all'. 
00:04:21.484 CC lib/log/log.o 00:04:21.484 CC lib/ut/ut.o 00:04:21.484 CC lib/log/log_flags.o 00:04:21.484 CC lib/ut_mock/mock.o 00:04:21.484 CC lib/log/log_deprecated.o 00:04:21.484 LIB libspdk_log.a 00:04:21.484 LIB libspdk_ut_mock.a 00:04:21.484 LIB libspdk_ut.a 00:04:21.484 SO libspdk_ut.so.2.0 00:04:21.484 SO libspdk_ut_mock.so.6.0 00:04:21.484 SO libspdk_log.so.7.0 00:04:21.484 SYMLINK libspdk_ut.so 00:04:21.484 SYMLINK libspdk_ut_mock.so 00:04:21.484 SYMLINK libspdk_log.so 00:04:21.484 CC lib/dma/dma.o 00:04:21.484 CC lib/ioat/ioat.o 00:04:21.484 CC lib/util/base64.o 00:04:21.484 CC lib/util/bit_array.o 00:04:21.484 CC lib/util/cpuset.o 00:04:21.484 CC lib/util/crc16.o 00:04:21.484 CC lib/util/crc32.o 00:04:21.484 CC lib/util/crc32c.o 00:04:21.484 CXX lib/trace_parser/trace.o 00:04:21.484 CC lib/vfio_user/host/vfio_user_pci.o 00:04:21.484 CC lib/util/crc32_ieee.o 00:04:21.484 CC lib/vfio_user/host/vfio_user.o 00:04:21.484 CC lib/util/crc64.o 00:04:21.484 CC lib/util/dif.o 00:04:21.484 LIB libspdk_dma.a 00:04:21.484 CC lib/util/fd.o 00:04:21.484 CC lib/util/file.o 00:04:21.484 SO libspdk_dma.so.4.0 00:04:21.484 LIB libspdk_ioat.a 00:04:21.484 CC lib/util/hexlify.o 00:04:21.484 SYMLINK libspdk_dma.so 00:04:21.484 CC lib/util/iov.o 00:04:21.484 CC lib/util/math.o 00:04:21.484 SO libspdk_ioat.so.7.0 00:04:21.484 CC lib/util/pipe.o 00:04:21.484 CC lib/util/strerror_tls.o 00:04:21.484 SYMLINK libspdk_ioat.so 00:04:21.484 CC lib/util/string.o 00:04:21.484 CC lib/util/uuid.o 00:04:21.484 LIB libspdk_vfio_user.a 00:04:21.484 SO libspdk_vfio_user.so.5.0 00:04:21.484 CC lib/util/fd_group.o 00:04:21.484 SYMLINK libspdk_vfio_user.so 00:04:21.484 CC lib/util/xor.o 00:04:21.484 CC lib/util/zipf.o 00:04:21.484 LIB libspdk_util.a 00:04:21.484 SO libspdk_util.so.9.1 00:04:21.484 LIB libspdk_trace_parser.a 00:04:21.484 SYMLINK libspdk_util.so 00:04:21.484 SO libspdk_trace_parser.so.5.0 00:04:21.484 SYMLINK libspdk_trace_parser.so 00:04:21.484 CC lib/conf/conf.o 00:04:21.484 CC lib/rdma_utils/rdma_utils.o 00:04:21.484 CC lib/vmd/vmd.o 00:04:21.484 CC lib/json/json_parse.o 00:04:21.484 CC lib/json/json_util.o 00:04:21.484 CC lib/vmd/led.o 00:04:21.484 CC lib/json/json_write.o 00:04:21.484 CC lib/idxd/idxd.o 00:04:21.484 CC lib/rdma_provider/common.o 00:04:21.484 CC lib/env_dpdk/env.o 00:04:21.484 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:21.484 CC lib/idxd/idxd_user.o 00:04:21.484 LIB libspdk_conf.a 00:04:21.484 CC lib/idxd/idxd_kernel.o 00:04:21.484 CC lib/env_dpdk/memory.o 00:04:21.484 SO libspdk_conf.so.6.0 00:04:21.484 LIB libspdk_rdma_utils.a 00:04:21.484 LIB libspdk_json.a 00:04:21.484 SO libspdk_rdma_utils.so.1.0 00:04:21.484 SYMLINK libspdk_conf.so 00:04:21.484 CC lib/env_dpdk/pci.o 00:04:21.484 SO libspdk_json.so.6.0 00:04:21.484 LIB libspdk_rdma_provider.a 00:04:21.484 SYMLINK libspdk_rdma_utils.so 00:04:21.484 CC lib/env_dpdk/init.o 00:04:21.484 SO libspdk_rdma_provider.so.6.0 00:04:21.484 SYMLINK libspdk_json.so 00:04:21.484 CC lib/env_dpdk/threads.o 00:04:21.484 CC lib/env_dpdk/pci_ioat.o 00:04:21.484 CC lib/env_dpdk/pci_virtio.o 00:04:21.484 SYMLINK libspdk_rdma_provider.so 00:04:21.484 LIB libspdk_idxd.a 00:04:21.484 CC lib/env_dpdk/pci_vmd.o 00:04:21.484 CC lib/env_dpdk/pci_idxd.o 00:04:21.484 CC lib/env_dpdk/pci_event.o 00:04:21.484 SO libspdk_idxd.so.12.0 00:04:21.484 CC lib/env_dpdk/sigbus_handler.o 00:04:21.484 CC lib/jsonrpc/jsonrpc_server.o 00:04:21.484 LIB libspdk_vmd.a 00:04:21.484 SYMLINK libspdk_idxd.so 00:04:21.484 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:21.484 SO 
libspdk_vmd.so.6.0 00:04:21.484 CC lib/env_dpdk/pci_dpdk.o 00:04:21.484 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:21.484 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:21.484 CC lib/jsonrpc/jsonrpc_client.o 00:04:21.484 SYMLINK libspdk_vmd.so 00:04:21.484 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:21.484 LIB libspdk_jsonrpc.a 00:04:21.484 SO libspdk_jsonrpc.so.6.0 00:04:21.484 SYMLINK libspdk_jsonrpc.so 00:04:21.484 CC lib/rpc/rpc.o 00:04:21.484 LIB libspdk_env_dpdk.a 00:04:21.484 SO libspdk_env_dpdk.so.14.1 00:04:21.484 LIB libspdk_rpc.a 00:04:21.484 SO libspdk_rpc.so.6.0 00:04:21.484 SYMLINK libspdk_env_dpdk.so 00:04:21.484 SYMLINK libspdk_rpc.so 00:04:21.484 CC lib/notify/notify.o 00:04:21.484 CC lib/notify/notify_rpc.o 00:04:21.484 CC lib/trace/trace.o 00:04:21.484 CC lib/trace/trace_rpc.o 00:04:21.484 CC lib/trace/trace_flags.o 00:04:21.484 CC lib/keyring/keyring.o 00:04:21.484 CC lib/keyring/keyring_rpc.o 00:04:21.484 LIB libspdk_notify.a 00:04:21.484 SO libspdk_notify.so.6.0 00:04:21.484 LIB libspdk_keyring.a 00:04:21.484 SO libspdk_keyring.so.1.0 00:04:21.484 SYMLINK libspdk_notify.so 00:04:21.484 LIB libspdk_trace.a 00:04:21.484 SYMLINK libspdk_keyring.so 00:04:21.484 SO libspdk_trace.so.10.0 00:04:21.484 SYMLINK libspdk_trace.so 00:04:21.741 CC lib/thread/thread.o 00:04:21.741 CC lib/thread/iobuf.o 00:04:21.741 CC lib/sock/sock.o 00:04:21.741 CC lib/sock/sock_rpc.o 00:04:22.306 LIB libspdk_sock.a 00:04:22.306 SO libspdk_sock.so.10.0 00:04:22.306 SYMLINK libspdk_sock.so 00:04:22.564 CC lib/nvme/nvme_ctrlr.o 00:04:22.564 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:22.564 CC lib/nvme/nvme_ns_cmd.o 00:04:22.564 CC lib/nvme/nvme_ns.o 00:04:22.564 CC lib/nvme/nvme_pcie.o 00:04:22.564 CC lib/nvme/nvme_pcie_common.o 00:04:22.564 CC lib/nvme/nvme_fabric.o 00:04:22.564 CC lib/nvme/nvme.o 00:04:22.564 CC lib/nvme/nvme_qpair.o 00:04:23.495 LIB libspdk_thread.a 00:04:23.495 SO libspdk_thread.so.10.1 00:04:23.495 CC lib/nvme/nvme_quirks.o 00:04:23.495 CC lib/nvme/nvme_transport.o 00:04:23.495 SYMLINK libspdk_thread.so 00:04:23.495 CC lib/nvme/nvme_discovery.o 00:04:23.495 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:23.752 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:23.752 CC lib/nvme/nvme_tcp.o 00:04:23.752 CC lib/accel/accel.o 00:04:23.752 CC lib/blob/blobstore.o 00:04:23.752 CC lib/blob/request.o 00:04:24.010 CC lib/blob/zeroes.o 00:04:24.267 CC lib/nvme/nvme_opal.o 00:04:24.267 CC lib/blob/blob_bs_dev.o 00:04:24.267 CC lib/init/json_config.o 00:04:24.267 CC lib/nvme/nvme_io_msg.o 00:04:24.268 CC lib/init/subsystem.o 00:04:24.268 CC lib/virtio/virtio.o 00:04:24.526 CC lib/virtio/virtio_vhost_user.o 00:04:24.526 CC lib/virtio/virtio_vfio_user.o 00:04:24.526 CC lib/init/subsystem_rpc.o 00:04:24.526 CC lib/accel/accel_rpc.o 00:04:24.526 CC lib/accel/accel_sw.o 00:04:24.526 CC lib/init/rpc.o 00:04:24.784 CC lib/nvme/nvme_poll_group.o 00:04:24.784 CC lib/virtio/virtio_pci.o 00:04:24.784 CC lib/nvme/nvme_zns.o 00:04:24.784 LIB libspdk_init.a 00:04:24.784 CC lib/nvme/nvme_stubs.o 00:04:24.784 CC lib/nvme/nvme_auth.o 00:04:24.784 SO libspdk_init.so.5.0 00:04:25.042 SYMLINK libspdk_init.so 00:04:25.042 CC lib/nvme/nvme_cuse.o 00:04:25.042 LIB libspdk_accel.a 00:04:25.042 LIB libspdk_virtio.a 00:04:25.042 SO libspdk_accel.so.15.1 00:04:25.042 SO libspdk_virtio.so.7.0 00:04:25.042 SYMLINK libspdk_virtio.so 00:04:25.042 CC lib/nvme/nvme_rdma.o 00:04:25.042 SYMLINK libspdk_accel.so 00:04:25.300 CC lib/event/app.o 00:04:25.300 CC lib/event/reactor.o 00:04:25.300 CC lib/bdev/bdev.o 00:04:25.300 CC lib/bdev/bdev_rpc.o 00:04:25.300 CC 
lib/bdev/bdev_zone.o 00:04:25.300 CC lib/event/log_rpc.o 00:04:25.559 CC lib/event/app_rpc.o 00:04:25.559 CC lib/bdev/part.o 00:04:25.559 CC lib/bdev/scsi_nvme.o 00:04:25.559 CC lib/event/scheduler_static.o 00:04:25.818 LIB libspdk_event.a 00:04:25.818 SO libspdk_event.so.14.0 00:04:26.077 SYMLINK libspdk_event.so 00:04:26.336 LIB libspdk_nvme.a 00:04:26.595 SO libspdk_nvme.so.13.1 00:04:26.595 LIB libspdk_blob.a 00:04:26.854 SO libspdk_blob.so.11.0 00:04:26.854 SYMLINK libspdk_blob.so 00:04:26.854 SYMLINK libspdk_nvme.so 00:04:27.113 CC lib/blobfs/blobfs.o 00:04:27.113 CC lib/blobfs/tree.o 00:04:27.113 CC lib/lvol/lvol.o 00:04:28.048 LIB libspdk_blobfs.a 00:04:28.048 LIB libspdk_bdev.a 00:04:28.048 SO libspdk_blobfs.so.10.0 00:04:28.048 SO libspdk_bdev.so.15.1 00:04:28.048 SYMLINK libspdk_blobfs.so 00:04:28.048 SYMLINK libspdk_bdev.so 00:04:28.048 LIB libspdk_lvol.a 00:04:28.048 SO libspdk_lvol.so.10.0 00:04:28.048 SYMLINK libspdk_lvol.so 00:04:28.307 CC lib/nbd/nbd.o 00:04:28.307 CC lib/ftl/ftl_core.o 00:04:28.307 CC lib/nvmf/ctrlr.o 00:04:28.307 CC lib/nbd/nbd_rpc.o 00:04:28.307 CC lib/ftl/ftl_layout.o 00:04:28.307 CC lib/ftl/ftl_init.o 00:04:28.307 CC lib/nvmf/ctrlr_discovery.o 00:04:28.307 CC lib/scsi/dev.o 00:04:28.307 CC lib/nvmf/ctrlr_bdev.o 00:04:28.307 CC lib/ublk/ublk.o 00:04:28.307 CC lib/ublk/ublk_rpc.o 00:04:28.307 CC lib/ftl/ftl_debug.o 00:04:28.565 CC lib/scsi/lun.o 00:04:28.565 CC lib/scsi/port.o 00:04:28.565 CC lib/scsi/scsi.o 00:04:28.565 LIB libspdk_nbd.a 00:04:28.565 SO libspdk_nbd.so.7.0 00:04:28.565 CC lib/ftl/ftl_io.o 00:04:28.565 CC lib/scsi/scsi_bdev.o 00:04:28.824 CC lib/scsi/scsi_pr.o 00:04:28.824 SYMLINK libspdk_nbd.so 00:04:28.824 CC lib/scsi/scsi_rpc.o 00:04:28.824 CC lib/nvmf/subsystem.o 00:04:28.824 CC lib/scsi/task.o 00:04:28.824 CC lib/ftl/ftl_sb.o 00:04:28.824 LIB libspdk_ublk.a 00:04:28.824 CC lib/ftl/ftl_l2p.o 00:04:28.824 CC lib/nvmf/nvmf.o 00:04:28.824 SO libspdk_ublk.so.3.0 00:04:28.824 CC lib/ftl/ftl_l2p_flat.o 00:04:29.082 SYMLINK libspdk_ublk.so 00:04:29.082 CC lib/ftl/ftl_nv_cache.o 00:04:29.082 CC lib/nvmf/nvmf_rpc.o 00:04:29.082 CC lib/ftl/ftl_band.o 00:04:29.082 CC lib/ftl/ftl_band_ops.o 00:04:29.082 CC lib/ftl/ftl_writer.o 00:04:29.082 CC lib/nvmf/transport.o 00:04:29.082 LIB libspdk_scsi.a 00:04:29.341 SO libspdk_scsi.so.9.0 00:04:29.341 SYMLINK libspdk_scsi.so 00:04:29.341 CC lib/nvmf/tcp.o 00:04:29.341 CC lib/nvmf/stubs.o 00:04:29.341 CC lib/ftl/ftl_rq.o 00:04:29.341 CC lib/nvmf/mdns_server.o 00:04:29.599 CC lib/nvmf/rdma.o 00:04:29.857 CC lib/nvmf/auth.o 00:04:29.857 CC lib/ftl/ftl_reloc.o 00:04:29.857 CC lib/ftl/ftl_l2p_cache.o 00:04:29.857 CC lib/ftl/ftl_p2l.o 00:04:29.857 CC lib/iscsi/conn.o 00:04:29.857 CC lib/iscsi/init_grp.o 00:04:29.857 CC lib/ftl/mngt/ftl_mngt.o 00:04:29.857 CC lib/vhost/vhost.o 00:04:30.115 CC lib/vhost/vhost_rpc.o 00:04:30.115 CC lib/iscsi/iscsi.o 00:04:30.373 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:30.373 CC lib/iscsi/md5.o 00:04:30.373 CC lib/vhost/vhost_scsi.o 00:04:30.373 CC lib/iscsi/param.o 00:04:30.373 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:30.631 CC lib/iscsi/portal_grp.o 00:04:30.631 CC lib/vhost/vhost_blk.o 00:04:30.631 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:30.631 CC lib/iscsi/tgt_node.o 00:04:30.889 CC lib/iscsi/iscsi_subsystem.o 00:04:30.889 CC lib/vhost/rte_vhost_user.o 00:04:30.889 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:30.889 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:30.889 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:31.147 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:31.147 CC lib/iscsi/iscsi_rpc.o 00:04:31.147 
CC lib/iscsi/task.o 00:04:31.147 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:31.147 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:31.147 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:31.405 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:31.405 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:31.405 CC lib/ftl/utils/ftl_conf.o 00:04:31.405 CC lib/ftl/utils/ftl_md.o 00:04:31.405 CC lib/ftl/utils/ftl_mempool.o 00:04:31.405 CC lib/ftl/utils/ftl_bitmap.o 00:04:31.662 CC lib/ftl/utils/ftl_property.o 00:04:31.662 LIB libspdk_iscsi.a 00:04:31.662 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:31.662 LIB libspdk_nvmf.a 00:04:31.662 SO libspdk_iscsi.so.8.0 00:04:31.662 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:31.662 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:31.662 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:31.662 SO libspdk_nvmf.so.18.1 00:04:31.662 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:31.920 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:31.920 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:31.920 SYMLINK libspdk_iscsi.so 00:04:31.920 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:31.920 LIB libspdk_vhost.a 00:04:31.920 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:31.920 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:31.920 SO libspdk_vhost.so.8.0 00:04:31.920 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:31.920 SYMLINK libspdk_nvmf.so 00:04:31.920 CC lib/ftl/base/ftl_base_dev.o 00:04:31.920 CC lib/ftl/base/ftl_base_bdev.o 00:04:31.920 CC lib/ftl/ftl_trace.o 00:04:31.920 SYMLINK libspdk_vhost.so 00:04:32.178 LIB libspdk_ftl.a 00:04:32.436 SO libspdk_ftl.so.9.0 00:04:33.002 SYMLINK libspdk_ftl.so 00:04:33.259 CC module/env_dpdk/env_dpdk_rpc.o 00:04:33.259 CC module/blob/bdev/blob_bdev.o 00:04:33.259 CC module/accel/error/accel_error.o 00:04:33.259 CC module/accel/ioat/accel_ioat.o 00:04:33.259 CC module/scheduler/gscheduler/gscheduler.o 00:04:33.259 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:33.259 CC module/accel/dsa/accel_dsa.o 00:04:33.259 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:33.259 CC module/sock/posix/posix.o 00:04:33.259 CC module/keyring/file/keyring.o 00:04:33.259 LIB libspdk_env_dpdk_rpc.a 00:04:33.259 SO libspdk_env_dpdk_rpc.so.6.0 00:04:33.259 SYMLINK libspdk_env_dpdk_rpc.so 00:04:33.259 CC module/accel/dsa/accel_dsa_rpc.o 00:04:33.517 LIB libspdk_scheduler_gscheduler.a 00:04:33.517 LIB libspdk_scheduler_dpdk_governor.a 00:04:33.517 CC module/keyring/file/keyring_rpc.o 00:04:33.517 CC module/accel/error/accel_error_rpc.o 00:04:33.517 SO libspdk_scheduler_gscheduler.so.4.0 00:04:33.517 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:33.517 CC module/accel/ioat/accel_ioat_rpc.o 00:04:33.517 LIB libspdk_scheduler_dynamic.a 00:04:33.517 SO libspdk_scheduler_dynamic.so.4.0 00:04:33.517 SYMLINK libspdk_scheduler_gscheduler.so 00:04:33.517 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:33.517 LIB libspdk_accel_dsa.a 00:04:33.517 LIB libspdk_blob_bdev.a 00:04:33.517 SYMLINK libspdk_scheduler_dynamic.so 00:04:33.517 SO libspdk_blob_bdev.so.11.0 00:04:33.517 SO libspdk_accel_dsa.so.5.0 00:04:33.517 LIB libspdk_keyring_file.a 00:04:33.517 LIB libspdk_accel_error.a 00:04:33.517 LIB libspdk_accel_ioat.a 00:04:33.517 SO libspdk_keyring_file.so.1.0 00:04:33.517 SYMLINK libspdk_blob_bdev.so 00:04:33.517 SO libspdk_accel_error.so.2.0 00:04:33.517 SYMLINK libspdk_accel_dsa.so 00:04:33.517 SO libspdk_accel_ioat.so.6.0 00:04:33.775 SYMLINK libspdk_keyring_file.so 00:04:33.775 SYMLINK libspdk_accel_error.so 00:04:33.775 CC module/keyring/linux/keyring.o 00:04:33.775 CC module/keyring/linux/keyring_rpc.o 00:04:33.775 SYMLINK 
libspdk_accel_ioat.so 00:04:33.775 CC module/accel/iaa/accel_iaa.o 00:04:33.775 CC module/accel/iaa/accel_iaa_rpc.o 00:04:33.775 CC module/sock/uring/uring.o 00:04:33.775 LIB libspdk_keyring_linux.a 00:04:33.775 CC module/bdev/error/vbdev_error.o 00:04:33.775 SO libspdk_keyring_linux.so.1.0 00:04:33.775 CC module/bdev/gpt/gpt.o 00:04:33.775 CC module/bdev/delay/vbdev_delay.o 00:04:34.033 CC module/blobfs/bdev/blobfs_bdev.o 00:04:34.033 LIB libspdk_accel_iaa.a 00:04:34.033 SO libspdk_accel_iaa.so.3.0 00:04:34.033 SYMLINK libspdk_keyring_linux.so 00:04:34.033 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:34.033 LIB libspdk_sock_posix.a 00:04:34.033 CC module/bdev/malloc/bdev_malloc.o 00:04:34.033 CC module/bdev/lvol/vbdev_lvol.o 00:04:34.033 SYMLINK libspdk_accel_iaa.so 00:04:34.033 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:34.033 SO libspdk_sock_posix.so.6.0 00:04:34.033 CC module/bdev/gpt/vbdev_gpt.o 00:04:34.033 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:34.033 SYMLINK libspdk_sock_posix.so 00:04:34.033 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:34.033 LIB libspdk_blobfs_bdev.a 00:04:34.033 CC module/bdev/error/vbdev_error_rpc.o 00:04:34.291 SO libspdk_blobfs_bdev.so.6.0 00:04:34.291 SYMLINK libspdk_blobfs_bdev.so 00:04:34.291 LIB libspdk_bdev_delay.a 00:04:34.291 LIB libspdk_bdev_error.a 00:04:34.291 SO libspdk_bdev_delay.so.6.0 00:04:34.291 SO libspdk_bdev_error.so.6.0 00:04:34.291 LIB libspdk_bdev_gpt.a 00:04:34.291 LIB libspdk_bdev_malloc.a 00:04:34.291 LIB libspdk_sock_uring.a 00:04:34.291 SO libspdk_bdev_malloc.so.6.0 00:04:34.291 SO libspdk_bdev_gpt.so.6.0 00:04:34.550 SO libspdk_sock_uring.so.5.0 00:04:34.550 CC module/bdev/null/bdev_null.o 00:04:34.550 SYMLINK libspdk_bdev_error.so 00:04:34.550 SYMLINK libspdk_bdev_delay.so 00:04:34.550 CC module/bdev/nvme/bdev_nvme.o 00:04:34.550 CC module/bdev/null/bdev_null_rpc.o 00:04:34.550 SYMLINK libspdk_bdev_malloc.so 00:04:34.550 CC module/bdev/passthru/vbdev_passthru.o 00:04:34.550 SYMLINK libspdk_sock_uring.so 00:04:34.550 SYMLINK libspdk_bdev_gpt.so 00:04:34.550 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:34.550 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:34.550 LIB libspdk_bdev_lvol.a 00:04:34.550 SO libspdk_bdev_lvol.so.6.0 00:04:34.550 SYMLINK libspdk_bdev_lvol.so 00:04:34.550 CC module/bdev/nvme/nvme_rpc.o 00:04:34.808 CC module/bdev/raid/bdev_raid.o 00:04:34.808 CC module/bdev/split/vbdev_split.o 00:04:34.808 CC module/bdev/split/vbdev_split_rpc.o 00:04:34.808 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:34.808 LIB libspdk_bdev_null.a 00:04:34.808 LIB libspdk_bdev_passthru.a 00:04:34.808 SO libspdk_bdev_null.so.6.0 00:04:34.808 SO libspdk_bdev_passthru.so.6.0 00:04:34.808 CC module/bdev/uring/bdev_uring.o 00:04:34.808 SYMLINK libspdk_bdev_null.so 00:04:34.808 CC module/bdev/nvme/bdev_mdns_client.o 00:04:34.808 SYMLINK libspdk_bdev_passthru.so 00:04:34.808 CC module/bdev/nvme/vbdev_opal.o 00:04:34.808 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:34.808 CC module/bdev/uring/bdev_uring_rpc.o 00:04:34.808 LIB libspdk_bdev_split.a 00:04:35.067 SO libspdk_bdev_split.so.6.0 00:04:35.067 SYMLINK libspdk_bdev_split.so 00:04:35.067 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:35.067 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:35.067 CC module/bdev/raid/bdev_raid_rpc.o 00:04:35.067 CC module/bdev/raid/bdev_raid_sb.o 00:04:35.067 CC module/bdev/raid/raid0.o 00:04:35.067 CC module/bdev/aio/bdev_aio.o 00:04:35.325 LIB libspdk_bdev_uring.a 00:04:35.325 SO libspdk_bdev_uring.so.6.0 00:04:35.325 LIB 
libspdk_bdev_zone_block.a 00:04:35.325 SO libspdk_bdev_zone_block.so.6.0 00:04:35.325 CC module/bdev/raid/raid1.o 00:04:35.325 SYMLINK libspdk_bdev_uring.so 00:04:35.325 CC module/bdev/raid/concat.o 00:04:35.325 CC module/bdev/ftl/bdev_ftl.o 00:04:35.325 SYMLINK libspdk_bdev_zone_block.so 00:04:35.325 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:35.325 CC module/bdev/aio/bdev_aio_rpc.o 00:04:35.325 CC module/bdev/iscsi/bdev_iscsi.o 00:04:35.583 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:35.583 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:35.583 LIB libspdk_bdev_aio.a 00:04:35.583 SO libspdk_bdev_aio.so.6.0 00:04:35.583 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:35.583 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:35.583 LIB libspdk_bdev_ftl.a 00:04:35.583 LIB libspdk_bdev_raid.a 00:04:35.583 SYMLINK libspdk_bdev_aio.so 00:04:35.583 SO libspdk_bdev_ftl.so.6.0 00:04:35.583 SO libspdk_bdev_raid.so.6.0 00:04:35.841 SYMLINK libspdk_bdev_ftl.so 00:04:35.841 LIB libspdk_bdev_iscsi.a 00:04:35.841 SYMLINK libspdk_bdev_raid.so 00:04:35.841 SO libspdk_bdev_iscsi.so.6.0 00:04:35.841 SYMLINK libspdk_bdev_iscsi.so 00:04:36.100 LIB libspdk_bdev_virtio.a 00:04:36.100 SO libspdk_bdev_virtio.so.6.0 00:04:36.100 SYMLINK libspdk_bdev_virtio.so 00:04:36.667 LIB libspdk_bdev_nvme.a 00:04:36.667 SO libspdk_bdev_nvme.so.7.0 00:04:36.667 SYMLINK libspdk_bdev_nvme.so 00:04:37.233 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:37.233 CC module/event/subsystems/vmd/vmd.o 00:04:37.233 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:37.233 CC module/event/subsystems/keyring/keyring.o 00:04:37.233 CC module/event/subsystems/iobuf/iobuf.o 00:04:37.233 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:37.233 CC module/event/subsystems/scheduler/scheduler.o 00:04:37.233 CC module/event/subsystems/sock/sock.o 00:04:37.233 LIB libspdk_event_scheduler.a 00:04:37.490 LIB libspdk_event_keyring.a 00:04:37.490 LIB libspdk_event_vhost_blk.a 00:04:37.490 LIB libspdk_event_vmd.a 00:04:37.490 LIB libspdk_event_sock.a 00:04:37.490 LIB libspdk_event_iobuf.a 00:04:37.490 SO libspdk_event_scheduler.so.4.0 00:04:37.490 SO libspdk_event_vhost_blk.so.3.0 00:04:37.490 SO libspdk_event_keyring.so.1.0 00:04:37.490 SO libspdk_event_sock.so.5.0 00:04:37.490 SO libspdk_event_vmd.so.6.0 00:04:37.490 SO libspdk_event_iobuf.so.3.0 00:04:37.490 SYMLINK libspdk_event_scheduler.so 00:04:37.490 SYMLINK libspdk_event_keyring.so 00:04:37.490 SYMLINK libspdk_event_vhost_blk.so 00:04:37.490 SYMLINK libspdk_event_sock.so 00:04:37.490 SYMLINK libspdk_event_vmd.so 00:04:37.490 SYMLINK libspdk_event_iobuf.so 00:04:37.749 CC module/event/subsystems/accel/accel.o 00:04:37.749 LIB libspdk_event_accel.a 00:04:38.008 SO libspdk_event_accel.so.6.0 00:04:38.008 SYMLINK libspdk_event_accel.so 00:04:38.267 CC module/event/subsystems/bdev/bdev.o 00:04:38.525 LIB libspdk_event_bdev.a 00:04:38.525 SO libspdk_event_bdev.so.6.0 00:04:38.525 SYMLINK libspdk_event_bdev.so 00:04:38.789 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:38.789 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:38.789 CC module/event/subsystems/nbd/nbd.o 00:04:38.789 CC module/event/subsystems/scsi/scsi.o 00:04:38.789 CC module/event/subsystems/ublk/ublk.o 00:04:39.073 LIB libspdk_event_nbd.a 00:04:39.073 LIB libspdk_event_ublk.a 00:04:39.073 LIB libspdk_event_scsi.a 00:04:39.073 SO libspdk_event_nbd.so.6.0 00:04:39.073 SO libspdk_event_ublk.so.3.0 00:04:39.073 SO libspdk_event_scsi.so.6.0 00:04:39.073 SYMLINK libspdk_event_nbd.so 00:04:39.073 SYMLINK libspdk_event_ublk.so 00:04:39.073 LIB 
libspdk_event_nvmf.a 00:04:39.073 SYMLINK libspdk_event_scsi.so 00:04:39.073 SO libspdk_event_nvmf.so.6.0 00:04:39.073 SYMLINK libspdk_event_nvmf.so 00:04:39.351 CC module/event/subsystems/iscsi/iscsi.o 00:04:39.351 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:39.618 LIB libspdk_event_vhost_scsi.a 00:04:39.618 LIB libspdk_event_iscsi.a 00:04:39.618 SO libspdk_event_vhost_scsi.so.3.0 00:04:39.618 SO libspdk_event_iscsi.so.6.0 00:04:39.618 SYMLINK libspdk_event_vhost_scsi.so 00:04:39.618 SYMLINK libspdk_event_iscsi.so 00:04:39.877 SO libspdk.so.6.0 00:04:39.877 SYMLINK libspdk.so 00:04:40.136 CC app/trace_record/trace_record.o 00:04:40.136 TEST_HEADER include/spdk/accel.h 00:04:40.136 TEST_HEADER include/spdk/accel_module.h 00:04:40.136 CXX app/trace/trace.o 00:04:40.136 TEST_HEADER include/spdk/assert.h 00:04:40.136 TEST_HEADER include/spdk/barrier.h 00:04:40.136 TEST_HEADER include/spdk/base64.h 00:04:40.136 TEST_HEADER include/spdk/bdev.h 00:04:40.136 TEST_HEADER include/spdk/bdev_module.h 00:04:40.136 TEST_HEADER include/spdk/bdev_zone.h 00:04:40.136 TEST_HEADER include/spdk/bit_array.h 00:04:40.136 TEST_HEADER include/spdk/bit_pool.h 00:04:40.136 TEST_HEADER include/spdk/blob_bdev.h 00:04:40.136 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:40.136 TEST_HEADER include/spdk/blobfs.h 00:04:40.136 TEST_HEADER include/spdk/blob.h 00:04:40.136 TEST_HEADER include/spdk/conf.h 00:04:40.136 TEST_HEADER include/spdk/config.h 00:04:40.136 TEST_HEADER include/spdk/cpuset.h 00:04:40.136 TEST_HEADER include/spdk/crc16.h 00:04:40.136 TEST_HEADER include/spdk/crc32.h 00:04:40.136 TEST_HEADER include/spdk/crc64.h 00:04:40.136 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:40.136 TEST_HEADER include/spdk/dif.h 00:04:40.136 TEST_HEADER include/spdk/dma.h 00:04:40.136 TEST_HEADER include/spdk/endian.h 00:04:40.136 TEST_HEADER include/spdk/env_dpdk.h 00:04:40.136 TEST_HEADER include/spdk/env.h 00:04:40.136 TEST_HEADER include/spdk/event.h 00:04:40.136 TEST_HEADER include/spdk/fd_group.h 00:04:40.136 TEST_HEADER include/spdk/fd.h 00:04:40.136 TEST_HEADER include/spdk/file.h 00:04:40.136 TEST_HEADER include/spdk/ftl.h 00:04:40.136 TEST_HEADER include/spdk/gpt_spec.h 00:04:40.136 TEST_HEADER include/spdk/hexlify.h 00:04:40.136 TEST_HEADER include/spdk/histogram_data.h 00:04:40.136 TEST_HEADER include/spdk/idxd.h 00:04:40.136 TEST_HEADER include/spdk/idxd_spec.h 00:04:40.136 TEST_HEADER include/spdk/init.h 00:04:40.136 CC examples/ioat/perf/perf.o 00:04:40.136 TEST_HEADER include/spdk/ioat.h 00:04:40.136 CC examples/util/zipf/zipf.o 00:04:40.136 TEST_HEADER include/spdk/ioat_spec.h 00:04:40.136 CC test/thread/poller_perf/poller_perf.o 00:04:40.137 TEST_HEADER include/spdk/iscsi_spec.h 00:04:40.137 TEST_HEADER include/spdk/json.h 00:04:40.137 TEST_HEADER include/spdk/jsonrpc.h 00:04:40.137 TEST_HEADER include/spdk/keyring.h 00:04:40.137 TEST_HEADER include/spdk/keyring_module.h 00:04:40.137 TEST_HEADER include/spdk/likely.h 00:04:40.137 TEST_HEADER include/spdk/log.h 00:04:40.137 TEST_HEADER include/spdk/lvol.h 00:04:40.137 TEST_HEADER include/spdk/memory.h 00:04:40.137 TEST_HEADER include/spdk/mmio.h 00:04:40.137 TEST_HEADER include/spdk/nbd.h 00:04:40.137 TEST_HEADER include/spdk/notify.h 00:04:40.137 TEST_HEADER include/spdk/nvme.h 00:04:40.137 CC test/dma/test_dma/test_dma.o 00:04:40.137 TEST_HEADER include/spdk/nvme_intel.h 00:04:40.137 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:40.137 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:40.137 TEST_HEADER include/spdk/nvme_spec.h 00:04:40.137 
TEST_HEADER include/spdk/nvme_zns.h 00:04:40.137 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:40.137 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:40.137 TEST_HEADER include/spdk/nvmf.h 00:04:40.137 TEST_HEADER include/spdk/nvmf_spec.h 00:04:40.137 TEST_HEADER include/spdk/nvmf_transport.h 00:04:40.137 TEST_HEADER include/spdk/opal.h 00:04:40.137 TEST_HEADER include/spdk/opal_spec.h 00:04:40.137 TEST_HEADER include/spdk/pci_ids.h 00:04:40.137 TEST_HEADER include/spdk/pipe.h 00:04:40.137 TEST_HEADER include/spdk/queue.h 00:04:40.137 CC test/app/bdev_svc/bdev_svc.o 00:04:40.137 TEST_HEADER include/spdk/reduce.h 00:04:40.137 TEST_HEADER include/spdk/rpc.h 00:04:40.137 TEST_HEADER include/spdk/scheduler.h 00:04:40.137 TEST_HEADER include/spdk/scsi.h 00:04:40.137 TEST_HEADER include/spdk/scsi_spec.h 00:04:40.137 TEST_HEADER include/spdk/sock.h 00:04:40.137 TEST_HEADER include/spdk/stdinc.h 00:04:40.137 TEST_HEADER include/spdk/string.h 00:04:40.137 TEST_HEADER include/spdk/thread.h 00:04:40.137 TEST_HEADER include/spdk/trace.h 00:04:40.137 TEST_HEADER include/spdk/trace_parser.h 00:04:40.137 TEST_HEADER include/spdk/tree.h 00:04:40.137 TEST_HEADER include/spdk/ublk.h 00:04:40.137 TEST_HEADER include/spdk/util.h 00:04:40.137 CC test/env/mem_callbacks/mem_callbacks.o 00:04:40.137 TEST_HEADER include/spdk/uuid.h 00:04:40.137 TEST_HEADER include/spdk/version.h 00:04:40.137 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:40.137 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:40.137 TEST_HEADER include/spdk/vhost.h 00:04:40.137 TEST_HEADER include/spdk/vmd.h 00:04:40.396 TEST_HEADER include/spdk/xor.h 00:04:40.396 TEST_HEADER include/spdk/zipf.h 00:04:40.396 CXX test/cpp_headers/accel.o 00:04:40.396 LINK poller_perf 00:04:40.396 LINK zipf 00:04:40.396 LINK interrupt_tgt 00:04:40.396 LINK spdk_trace_record 00:04:40.396 LINK ioat_perf 00:04:40.396 LINK bdev_svc 00:04:40.396 LINK mem_callbacks 00:04:40.396 CXX test/cpp_headers/accel_module.o 00:04:40.396 LINK spdk_trace 00:04:40.654 CC test/env/vtophys/vtophys.o 00:04:40.654 LINK test_dma 00:04:40.654 CC test/rpc_client/rpc_client_test.o 00:04:40.654 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:40.654 CC examples/ioat/verify/verify.o 00:04:40.654 CXX test/cpp_headers/assert.o 00:04:40.654 CC test/event/event_perf/event_perf.o 00:04:40.654 LINK vtophys 00:04:40.912 LINK env_dpdk_post_init 00:04:40.912 LINK rpc_client_test 00:04:40.912 CC app/nvmf_tgt/nvmf_main.o 00:04:40.912 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:40.912 CC examples/thread/thread/thread_ex.o 00:04:40.912 CXX test/cpp_headers/barrier.o 00:04:40.912 LINK event_perf 00:04:40.912 LINK verify 00:04:40.912 CC test/app/histogram_perf/histogram_perf.o 00:04:40.912 CC test/app/jsoncat/jsoncat.o 00:04:40.912 LINK nvmf_tgt 00:04:40.912 CXX test/cpp_headers/base64.o 00:04:41.169 CC test/env/memory/memory_ut.o 00:04:41.169 LINK thread 00:04:41.169 LINK histogram_perf 00:04:41.169 CC test/event/reactor/reactor.o 00:04:41.169 CC test/event/reactor_perf/reactor_perf.o 00:04:41.169 CC examples/sock/hello_world/hello_sock.o 00:04:41.169 LINK jsoncat 00:04:41.169 CXX test/cpp_headers/bdev.o 00:04:41.169 LINK nvme_fuzz 00:04:41.169 LINK reactor 00:04:41.169 LINK reactor_perf 00:04:41.427 CC app/iscsi_tgt/iscsi_tgt.o 00:04:41.427 CXX test/cpp_headers/bdev_module.o 00:04:41.427 CC app/spdk_lspci/spdk_lspci.o 00:04:41.427 CC test/env/pci/pci_ut.o 00:04:41.427 CC app/spdk_tgt/spdk_tgt.o 00:04:41.427 LINK hello_sock 00:04:41.427 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:41.427 CC 
test/app/stub/stub.o 00:04:41.427 CC test/event/app_repeat/app_repeat.o 00:04:41.427 LINK spdk_lspci 00:04:41.684 LINK iscsi_tgt 00:04:41.684 CXX test/cpp_headers/bdev_zone.o 00:04:41.684 LINK spdk_tgt 00:04:41.684 LINK stub 00:04:41.684 LINK app_repeat 00:04:41.684 CC examples/vmd/lsvmd/lsvmd.o 00:04:41.684 LINK memory_ut 00:04:41.684 LINK pci_ut 00:04:41.684 CXX test/cpp_headers/bit_array.o 00:04:41.684 CC examples/vmd/led/led.o 00:04:41.942 LINK lsvmd 00:04:41.942 CC examples/idxd/perf/perf.o 00:04:41.942 LINK led 00:04:41.942 CXX test/cpp_headers/bit_pool.o 00:04:41.942 CC app/spdk_nvme_perf/perf.o 00:04:41.942 CC test/event/scheduler/scheduler.o 00:04:41.942 CC examples/accel/perf/accel_perf.o 00:04:42.200 CC app/spdk_nvme_identify/identify.o 00:04:42.200 CXX test/cpp_headers/blob_bdev.o 00:04:42.200 CC app/spdk_nvme_discover/discovery_aer.o 00:04:42.200 CC examples/blob/hello_world/hello_blob.o 00:04:42.200 LINK scheduler 00:04:42.200 LINK idxd_perf 00:04:42.200 CC test/accel/dif/dif.o 00:04:42.458 LINK spdk_nvme_discover 00:04:42.458 CXX test/cpp_headers/blobfs_bdev.o 00:04:42.458 LINK hello_blob 00:04:42.458 LINK accel_perf 00:04:42.717 CC examples/blob/cli/blobcli.o 00:04:42.717 CXX test/cpp_headers/blobfs.o 00:04:42.717 CC app/spdk_top/spdk_top.o 00:04:42.717 CC test/blobfs/mkfs/mkfs.o 00:04:42.717 CXX test/cpp_headers/blob.o 00:04:42.717 LINK dif 00:04:42.717 LINK spdk_nvme_perf 00:04:42.976 CC app/vhost/vhost.o 00:04:42.976 LINK mkfs 00:04:42.976 LINK spdk_nvme_identify 00:04:42.976 CC test/lvol/esnap/esnap.o 00:04:42.976 CXX test/cpp_headers/conf.o 00:04:42.976 LINK vhost 00:04:42.976 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:42.976 CXX test/cpp_headers/config.o 00:04:42.976 LINK iscsi_fuzz 00:04:43.235 LINK blobcli 00:04:43.235 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:43.235 CXX test/cpp_headers/cpuset.o 00:04:43.235 CC test/nvme/aer/aer.o 00:04:43.235 CXX test/cpp_headers/crc16.o 00:04:43.235 CC test/bdev/bdevio/bdevio.o 00:04:43.494 CC app/spdk_dd/spdk_dd.o 00:04:43.494 CXX test/cpp_headers/crc32.o 00:04:43.494 CC test/nvme/reset/reset.o 00:04:43.494 LINK aer 00:04:43.494 CC app/fio/nvme/fio_plugin.o 00:04:43.494 CC examples/nvme/hello_world/hello_world.o 00:04:43.494 LINK spdk_top 00:04:43.494 LINK vhost_fuzz 00:04:43.494 CXX test/cpp_headers/crc64.o 00:04:43.753 CC test/nvme/sgl/sgl.o 00:04:43.753 LINK reset 00:04:43.753 LINK bdevio 00:04:43.753 LINK hello_world 00:04:43.753 CXX test/cpp_headers/dif.o 00:04:43.753 CC examples/nvme/reconnect/reconnect.o 00:04:43.753 CC test/nvme/e2edp/nvme_dp.o 00:04:43.753 LINK spdk_dd 00:04:44.012 CXX test/cpp_headers/dma.o 00:04:44.012 CC test/nvme/overhead/overhead.o 00:04:44.012 LINK sgl 00:04:44.012 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:44.012 CC app/fio/bdev/fio_plugin.o 00:04:44.012 LINK spdk_nvme 00:04:44.012 LINK nvme_dp 00:04:44.012 CXX test/cpp_headers/endian.o 00:04:44.012 LINK reconnect 00:04:44.271 CC test/nvme/err_injection/err_injection.o 00:04:44.271 CC examples/nvme/arbitration/arbitration.o 00:04:44.271 LINK overhead 00:04:44.271 CXX test/cpp_headers/env_dpdk.o 00:04:44.271 CC test/nvme/startup/startup.o 00:04:44.271 LINK err_injection 00:04:44.271 CC examples/bdev/hello_world/hello_bdev.o 00:04:44.271 CC examples/nvme/hotplug/hotplug.o 00:04:44.530 CXX test/cpp_headers/env.o 00:04:44.530 LINK nvme_manage 00:04:44.530 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:44.530 LINK startup 00:04:44.530 LINK spdk_bdev 00:04:44.530 LINK arbitration 00:04:44.530 CXX test/cpp_headers/event.o 00:04:44.530 CC 
test/nvme/reserve/reserve.o 00:04:44.530 LINK hello_bdev 00:04:44.530 LINK hotplug 00:04:44.530 CXX test/cpp_headers/fd_group.o 00:04:44.790 CXX test/cpp_headers/fd.o 00:04:44.790 LINK cmb_copy 00:04:44.790 CC examples/bdev/bdevperf/bdevperf.o 00:04:44.790 LINK reserve 00:04:44.790 CC test/nvme/simple_copy/simple_copy.o 00:04:44.790 CXX test/cpp_headers/file.o 00:04:44.790 CC examples/nvme/abort/abort.o 00:04:44.790 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:44.790 CC test/nvme/boot_partition/boot_partition.o 00:04:45.049 CC test/nvme/connect_stress/connect_stress.o 00:04:45.049 CC test/nvme/compliance/nvme_compliance.o 00:04:45.049 CXX test/cpp_headers/ftl.o 00:04:45.049 CC test/nvme/fused_ordering/fused_ordering.o 00:04:45.049 LINK simple_copy 00:04:45.049 LINK boot_partition 00:04:45.049 LINK pmr_persistence 00:04:45.049 LINK connect_stress 00:04:45.308 CXX test/cpp_headers/gpt_spec.o 00:04:45.308 LINK abort 00:04:45.308 LINK nvme_compliance 00:04:45.308 CXX test/cpp_headers/hexlify.o 00:04:45.308 CXX test/cpp_headers/histogram_data.o 00:04:45.308 LINK fused_ordering 00:04:45.308 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:45.308 CC test/nvme/fdp/fdp.o 00:04:45.308 CXX test/cpp_headers/idxd.o 00:04:45.568 CXX test/cpp_headers/idxd_spec.o 00:04:45.568 CXX test/cpp_headers/init.o 00:04:45.568 CXX test/cpp_headers/ioat.o 00:04:45.568 CXX test/cpp_headers/ioat_spec.o 00:04:45.568 CC test/nvme/cuse/cuse.o 00:04:45.568 LINK doorbell_aers 00:04:45.568 CXX test/cpp_headers/iscsi_spec.o 00:04:45.568 LINK bdevperf 00:04:45.568 CXX test/cpp_headers/json.o 00:04:45.568 CXX test/cpp_headers/jsonrpc.o 00:04:45.568 CXX test/cpp_headers/keyring.o 00:04:45.568 CXX test/cpp_headers/keyring_module.o 00:04:45.568 LINK fdp 00:04:45.827 CXX test/cpp_headers/likely.o 00:04:45.827 CXX test/cpp_headers/log.o 00:04:45.827 CXX test/cpp_headers/lvol.o 00:04:45.827 CXX test/cpp_headers/memory.o 00:04:45.827 CXX test/cpp_headers/mmio.o 00:04:45.827 CXX test/cpp_headers/nbd.o 00:04:45.827 CXX test/cpp_headers/notify.o 00:04:45.827 CXX test/cpp_headers/nvme.o 00:04:45.827 CXX test/cpp_headers/nvme_intel.o 00:04:45.827 CXX test/cpp_headers/nvme_ocssd.o 00:04:46.087 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:46.087 CXX test/cpp_headers/nvme_spec.o 00:04:46.087 CXX test/cpp_headers/nvme_zns.o 00:04:46.087 CC examples/nvmf/nvmf/nvmf.o 00:04:46.087 CXX test/cpp_headers/nvmf_cmd.o 00:04:46.087 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:46.087 CXX test/cpp_headers/nvmf.o 00:04:46.087 CXX test/cpp_headers/nvmf_spec.o 00:04:46.087 CXX test/cpp_headers/nvmf_transport.o 00:04:46.087 CXX test/cpp_headers/opal.o 00:04:46.087 CXX test/cpp_headers/opal_spec.o 00:04:46.346 CXX test/cpp_headers/pci_ids.o 00:04:46.346 CXX test/cpp_headers/pipe.o 00:04:46.346 CXX test/cpp_headers/queue.o 00:04:46.346 CXX test/cpp_headers/reduce.o 00:04:46.346 LINK nvmf 00:04:46.346 CXX test/cpp_headers/rpc.o 00:04:46.346 CXX test/cpp_headers/scheduler.o 00:04:46.346 CXX test/cpp_headers/scsi.o 00:04:46.346 CXX test/cpp_headers/scsi_spec.o 00:04:46.346 CXX test/cpp_headers/sock.o 00:04:46.346 CXX test/cpp_headers/stdinc.o 00:04:46.347 CXX test/cpp_headers/string.o 00:04:46.606 CXX test/cpp_headers/thread.o 00:04:46.606 CXX test/cpp_headers/trace.o 00:04:46.606 CXX test/cpp_headers/trace_parser.o 00:04:46.606 CXX test/cpp_headers/tree.o 00:04:46.606 CXX test/cpp_headers/ublk.o 00:04:46.606 CXX test/cpp_headers/util.o 00:04:46.606 CXX test/cpp_headers/uuid.o 00:04:46.606 CXX test/cpp_headers/version.o 00:04:46.606 CXX 
test/cpp_headers/vfio_user_pci.o 00:04:46.606 CXX test/cpp_headers/vfio_user_spec.o 00:04:46.606 CXX test/cpp_headers/vhost.o 00:04:46.606 CXX test/cpp_headers/vmd.o 00:04:46.606 CXX test/cpp_headers/xor.o 00:04:46.606 CXX test/cpp_headers/zipf.o 00:04:46.866 LINK cuse 00:04:47.804 LINK esnap 00:04:48.063 00:04:48.063 real 0m53.614s 00:04:48.063 user 5m3.360s 00:04:48.063 sys 1m2.131s 00:04:48.063 05:51:39 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:04:48.063 05:51:39 make -- common/autotest_common.sh@10 -- $ set +x 00:04:48.063 ************************************ 00:04:48.063 END TEST make 00:04:48.063 ************************************ 00:04:48.323 05:51:39 -- common/autotest_common.sh@1142 -- $ return 0 00:04:48.323 05:51:39 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:48.323 05:51:39 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:48.323 05:51:39 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:48.323 05:51:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:48.323 05:51:39 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:48.323 05:51:39 -- pm/common@44 -- $ pid=5936 00:04:48.323 05:51:39 -- pm/common@50 -- $ kill -TERM 5936 00:04:48.323 05:51:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:48.323 05:51:39 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:48.323 05:51:39 -- pm/common@44 -- $ pid=5938 00:04:48.323 05:51:39 -- pm/common@50 -- $ kill -TERM 5938 00:04:48.323 05:51:39 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:48.323 05:51:39 -- nvmf/common.sh@7 -- # uname -s 00:04:48.323 05:51:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:48.323 05:51:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:48.323 05:51:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:48.323 05:51:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:48.323 05:51:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:48.323 05:51:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:48.323 05:51:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:48.323 05:51:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:48.323 05:51:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:48.323 05:51:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:48.323 05:51:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:04:48.323 05:51:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=d95af516-4532-4483-a837-b3cd72acabce 00:04:48.323 05:51:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:48.323 05:51:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:48.323 05:51:39 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:48.323 05:51:39 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:48.323 05:51:39 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:48.323 05:51:39 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:48.323 05:51:39 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:48.323 05:51:39 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:48.323 05:51:39 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.323 05:51:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.323 05:51:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.323 05:51:39 -- paths/export.sh@5 -- # export PATH 00:04:48.323 05:51:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.323 05:51:39 -- nvmf/common.sh@47 -- # : 0 00:04:48.323 05:51:39 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:48.323 05:51:39 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:48.323 05:51:39 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:48.323 05:51:39 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:48.323 05:51:39 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:48.323 05:51:39 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:48.324 05:51:39 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:48.324 05:51:39 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:48.324 05:51:39 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:48.324 05:51:39 -- spdk/autotest.sh@32 -- # uname -s 00:04:48.324 05:51:39 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:48.324 05:51:39 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:48.324 05:51:39 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:48.324 05:51:39 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:48.324 05:51:39 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:48.324 05:51:39 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:48.324 05:51:39 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:48.324 05:51:39 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:48.324 05:51:39 -- spdk/autotest.sh@48 -- # udevadm_pid=64930 00:04:48.324 05:51:39 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:48.324 05:51:39 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:48.324 05:51:39 -- pm/common@17 -- # local monitor 00:04:48.324 05:51:39 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:48.324 05:51:39 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:48.324 05:51:39 -- pm/common@25 -- # sleep 1 00:04:48.324 05:51:39 -- pm/common@21 -- # date +%s 00:04:48.324 05:51:39 -- pm/common@21 -- # date +%s 00:04:48.324 05:51:39 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1720849899 
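Editor's note: the two monitors launched above (collect-cpu-load and collect-vmstat) keep sampling system counters for the whole autotest run and append them to the *.pm.log files named in the next entries. As a rough, hypothetical sketch of what such a sampler does (this is not the actual scripts/perf/pm/ implementation; the script name and output-directory argument below are made up for illustration):

    #!/usr/bin/env bash
    # sample-load.sh -- append one load/vmstat snapshot per second to a pm.log file
    outdir=${1:-/tmp/power}                      # assumed output directory
    mkdir -p "$outdir"
    logfile=$outdir/monitor.$(date +%s).pm.log
    while :; do
        # the first three fields of /proc/loadavg are the 1/5/15-minute load averages
        printf '%s load: %s\n' "$(date +%T)" "$(cut -d' ' -f1-3 /proc/loadavg)" >> "$logfile"
        # 'vmstat 1 2' blocks for about a second and its last line is a fresh 1-second
        # sample, so it also paces the loop at roughly one snapshot per second
        vmstat 1 2 | tail -n1 >> "$logfile"
    done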
00:04:48.324 05:51:39 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1720849899 00:04:48.324 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1720849899_collect-vmstat.pm.log 00:04:48.324 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1720849899_collect-cpu-load.pm.log 00:04:49.261 05:51:40 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:49.261 05:51:40 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:49.261 05:51:40 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:49.261 05:51:40 -- common/autotest_common.sh@10 -- # set +x 00:04:49.520 05:51:40 -- spdk/autotest.sh@59 -- # create_test_list 00:04:49.520 05:51:40 -- common/autotest_common.sh@746 -- # xtrace_disable 00:04:49.520 05:51:40 -- common/autotest_common.sh@10 -- # set +x 00:04:49.520 05:51:41 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:49.520 05:51:41 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:49.520 05:51:41 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:49.520 05:51:41 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:49.520 05:51:41 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:49.520 05:51:41 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:49.520 05:51:41 -- common/autotest_common.sh@1455 -- # uname 00:04:49.520 05:51:41 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:49.520 05:51:41 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:49.520 05:51:41 -- common/autotest_common.sh@1475 -- # uname 00:04:49.520 05:51:41 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:49.520 05:51:41 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:49.520 05:51:41 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:49.520 05:51:41 -- spdk/autotest.sh@72 -- # hash lcov 00:04:49.520 05:51:41 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:49.520 05:51:41 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:49.520 --rc lcov_branch_coverage=1 00:04:49.520 --rc lcov_function_coverage=1 00:04:49.520 --rc genhtml_branch_coverage=1 00:04:49.520 --rc genhtml_function_coverage=1 00:04:49.520 --rc genhtml_legend=1 00:04:49.520 --rc geninfo_all_blocks=1 00:04:49.520 ' 00:04:49.520 05:51:41 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:49.520 --rc lcov_branch_coverage=1 00:04:49.520 --rc lcov_function_coverage=1 00:04:49.520 --rc genhtml_branch_coverage=1 00:04:49.520 --rc genhtml_function_coverage=1 00:04:49.520 --rc genhtml_legend=1 00:04:49.520 --rc geninfo_all_blocks=1 00:04:49.520 ' 00:04:49.520 05:51:41 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:49.520 --rc lcov_branch_coverage=1 00:04:49.520 --rc lcov_function_coverage=1 00:04:49.520 --rc genhtml_branch_coverage=1 00:04:49.520 --rc genhtml_function_coverage=1 00:04:49.520 --rc genhtml_legend=1 00:04:49.520 --rc geninfo_all_blocks=1 00:04:49.520 --no-external' 00:04:49.520 05:51:41 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:49.520 --rc lcov_branch_coverage=1 00:04:49.520 --rc lcov_function_coverage=1 00:04:49.520 --rc genhtml_branch_coverage=1 00:04:49.520 --rc genhtml_function_coverage=1 00:04:49.520 --rc genhtml_legend=1 00:04:49.520 --rc geninfo_all_blocks=1 00:04:49.520 --no-external' 00:04:49.520 05:51:41 -- spdk/autotest.sh@83 -- # lcov 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:49.520 lcov: LCOV version 1.14 00:04:49.520 05:51:41 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:04.403 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:04.403 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:14.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:05:14.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:05:14.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:05:14.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:05:14.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:05:14.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:05:14.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:05:14.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:05:14.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:05:14.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:05:14.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:05:14.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:05:14.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:05:14.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:05:14.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:05:14.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:05:14.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:05:14.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:05:14.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:05:14.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:05:14.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:05:14.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:05:14.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:05:14.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:05:14.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:05:14.380 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:05:14.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:05:14.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:05:14.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:05:14.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:05:14.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:05:14.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:05:14.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:05:14.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:05:14.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:05:14.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:05:14.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:05:14.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:05:14.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:05:14.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:05:14.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:05:14.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:05:14.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:05:14.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:05:14.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:05:14.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:05:14.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:05:14.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:05:14.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:05:14.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:05:14.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:05:14.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:05:14.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:05:14.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:05:14.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:05:14.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:05:14.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:05:14.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:05:14.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:05:14.380 geninfo: WARNING: GCOV did 
not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:05:14.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:05:14.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:05:14.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:05:14.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:05:14.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:05:14.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:05:14.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:05:14.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:05:14.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:05:14.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:05:14.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:05:14.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:05:14.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:05:14.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:05:14.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:05:14.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:05:14.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:05:14.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:05:14.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:05:14.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:05:14.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:05:14.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:05:14.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:05:14.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:05:14.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:05:14.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:05:14.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:05:14.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:05:14.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:05:14.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:05:14.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:05:14.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:05:14.381 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:05:14.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:05:14.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:05:14.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:05:14.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:05:14.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:05:14.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:05:14.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:05:14.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:05:14.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:05:14.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:05:14.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:05:14.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:05:14.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:05:14.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:05:14.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:05:14.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:05:14.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:05:14.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:05:14.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:05:14.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:05:14.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:05:14.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:05:14.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:05:14.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:05:14.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:05:14.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:05:14.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:05:14.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:05:14.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:05:14.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:05:14.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:05:14.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 
00:05:14.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:05:14.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:05:14.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:05:14.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:05:14.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:05:14.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:05:14.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:05:14.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:05:14.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:05:14.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:05:14.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:05:14.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:05:14.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:05:14.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:05:14.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:05:14.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:05:14.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:05:14.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:05:14.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:05:14.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:05:14.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:05:14.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:05:14.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:05:14.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:05:14.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:05:14.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:05:14.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:05:14.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:05:14.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:05:14.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:05:14.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:05:14.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:05:14.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:05:14.381 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:05:14.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:05:14.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:05:14.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:05:14.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:05:14.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:05:14.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:05:14.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:05:14.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:05:14.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:05:14.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:05:14.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:05:14.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:05:14.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:05:14.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:05:14.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:05:14.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:05:14.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:05:16.914 05:52:08 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:05:16.914 05:52:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:16.914 05:52:08 -- common/autotest_common.sh@10 -- # set +x 00:05:16.914 05:52:08 -- spdk/autotest.sh@91 -- # rm -f 00:05:16.914 05:52:08 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:17.482 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:17.741 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:17.741 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:17.741 05:52:09 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:05:17.741 05:52:09 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:17.741 05:52:09 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:17.741 05:52:09 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:17.741 05:52:09 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:17.741 05:52:09 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:17.741 05:52:09 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:17.741 05:52:09 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:17.741 05:52:09 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:17.741 05:52:09 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:17.741 05:52:09 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:05:17.741 05:52:09 -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:05:17.741 
05:52:09 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:05:17.741 05:52:09 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:17.741 05:52:09 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:17.741 05:52:09 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 00:05:17.741 05:52:09 -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:05:17.741 05:52:09 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:05:17.741 05:52:09 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:17.741 05:52:09 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:17.741 05:52:09 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:05:17.741 05:52:09 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:05:17.741 05:52:09 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:17.741 05:52:09 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:17.741 05:52:09 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:05:17.741 05:52:09 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:17.741 05:52:09 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:17.741 05:52:09 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:05:17.741 05:52:09 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:05:17.741 05:52:09 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:17.741 No valid GPT data, bailing 00:05:17.741 05:52:09 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:17.741 05:52:09 -- scripts/common.sh@391 -- # pt= 00:05:17.741 05:52:09 -- scripts/common.sh@392 -- # return 1 00:05:17.741 05:52:09 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:17.741 1+0 records in 00:05:17.741 1+0 records out 00:05:17.741 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00365058 s, 287 MB/s 00:05:17.741 05:52:09 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:17.741 05:52:09 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:17.741 05:52:09 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n2 00:05:17.741 05:52:09 -- scripts/common.sh@378 -- # local block=/dev/nvme0n2 pt 00:05:17.741 05:52:09 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n2 00:05:17.741 No valid GPT data, bailing 00:05:17.741 05:52:09 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:05:17.741 05:52:09 -- scripts/common.sh@391 -- # pt= 00:05:17.741 05:52:09 -- scripts/common.sh@392 -- # return 1 00:05:17.741 05:52:09 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n2 bs=1M count=1 00:05:17.741 1+0 records in 00:05:17.741 1+0 records out 00:05:17.741 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00598439 s, 175 MB/s 00:05:17.741 05:52:09 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:17.741 05:52:09 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:17.741 05:52:09 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n3 00:05:17.741 05:52:09 -- scripts/common.sh@378 -- # local block=/dev/nvme0n3 pt 00:05:17.741 05:52:09 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n3 00:05:18.000 No valid GPT data, bailing 00:05:18.000 05:52:09 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:05:18.000 05:52:09 -- scripts/common.sh@391 -- # pt= 00:05:18.000 05:52:09 -- scripts/common.sh@392 -- # return 1 
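Editor's note: each NVMe namespace above goes through the same sequence: a zoned check via /sys/block/*/queue/zoned, a partition-table probe (spdk-gpt.py plus blkid -s PTTYPE), and, when nothing is found and the device is idle, a dd that zeroes the first MiB so stale metadata cannot leak into later tests. A standalone, illustrative sketch of that order (destructive if actually run; the real gating lives in block_in_use in scripts/common.sh, not in this simplified mount check):

    #!/usr/bin/env bash
    # wipe-idle-nvme.sh -- zero the first MiB of unzoned, unmounted, unpartitioned namespaces
    shopt -s nullglob extglob
    for dev in /dev/nvme*n!(*p*); do                         # namespaces only, skip partitions
        zoned=/sys/block/$(basename "$dev")/queue/zoned
        [[ -e $zoned && $(<"$zoned") != none ]] && continue  # skip zoned namespaces
        grep -q "^$dev" /proc/mounts && continue             # skip mounted devices/partitions
        [[ -n $(blkid -s PTTYPE -o value "$dev") ]] && continue  # skip if a partition table exists
        dd if=/dev/zero of="$dev" bs=1M count=1              # clear the first MiB
    done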
00:05:18.000 05:52:09 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n3 bs=1M count=1 00:05:18.000 1+0 records in 00:05:18.000 1+0 records out 00:05:18.000 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00421475 s, 249 MB/s 00:05:18.000 05:52:09 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:18.000 05:52:09 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:18.000 05:52:09 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:05:18.000 05:52:09 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:05:18.000 05:52:09 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:18.000 No valid GPT data, bailing 00:05:18.000 05:52:09 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:18.000 05:52:09 -- scripts/common.sh@391 -- # pt= 00:05:18.000 05:52:09 -- scripts/common.sh@392 -- # return 1 00:05:18.000 05:52:09 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:18.000 1+0 records in 00:05:18.000 1+0 records out 00:05:18.000 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00406394 s, 258 MB/s 00:05:18.000 05:52:09 -- spdk/autotest.sh@118 -- # sync 00:05:18.259 05:52:09 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:18.259 05:52:09 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:18.259 05:52:09 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:20.158 05:52:11 -- spdk/autotest.sh@124 -- # uname -s 00:05:20.158 05:52:11 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:05:20.158 05:52:11 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:20.158 05:52:11 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:20.158 05:52:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.158 05:52:11 -- common/autotest_common.sh@10 -- # set +x 00:05:20.158 ************************************ 00:05:20.158 START TEST setup.sh 00:05:20.158 ************************************ 00:05:20.158 05:52:11 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:20.158 * Looking for test storage... 00:05:20.158 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:20.158 05:52:11 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:05:20.158 05:52:11 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:20.158 05:52:11 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:20.158 05:52:11 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:20.158 05:52:11 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.158 05:52:11 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:20.158 ************************************ 00:05:20.158 START TEST acl 00:05:20.158 ************************************ 00:05:20.158 05:52:11 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:20.158 * Looking for test storage... 
00:05:20.158 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:20.158 05:52:11 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:05:20.158 05:52:11 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:20.158 05:52:11 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:20.158 05:52:11 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:20.158 05:52:11 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:20.158 05:52:11 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:20.158 05:52:11 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:20.158 05:52:11 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:20.158 05:52:11 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:20.158 05:52:11 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:20.158 05:52:11 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:05:20.158 05:52:11 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:05:20.158 05:52:11 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:05:20.158 05:52:11 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:20.158 05:52:11 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:20.158 05:52:11 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 00:05:20.158 05:52:11 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:05:20.158 05:52:11 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:05:20.158 05:52:11 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:20.158 05:52:11 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:20.158 05:52:11 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:05:20.158 05:52:11 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:05:20.158 05:52:11 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:20.158 05:52:11 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:20.159 05:52:11 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:05:20.159 05:52:11 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:05:20.159 05:52:11 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:05:20.159 05:52:11 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:05:20.159 05:52:11 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:05:20.159 05:52:11 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:20.159 05:52:11 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:21.094 05:52:12 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:05:21.094 05:52:12 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:05:21.094 05:52:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:21.094 05:52:12 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:05:21.094 05:52:12 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:05:21.094 05:52:12 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:21.707 05:52:13 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:05:21.707 05:52:13 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:21.707 05:52:13 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:21.707 Hugepages 00:05:21.707 node hugesize free / total 00:05:21.707 05:52:13 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:21.707 05:52:13 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:21.707 05:52:13 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:21.707 00:05:21.707 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:21.707 05:52:13 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:21.707 05:52:13 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:21.707 05:52:13 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:21.707 05:52:13 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:05:21.707 05:52:13 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:05:21.707 05:52:13 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:21.707 05:52:13 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:21.707 05:52:13 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:05:21.707 05:52:13 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:21.707 05:52:13 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:05:21.707 05:52:13 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:21.707 05:52:13 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:21.707 05:52:13 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:21.966 05:52:13 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:05:21.966 05:52:13 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:21.966 05:52:13 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:21.966 05:52:13 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:21.966 05:52:13 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:21.966 05:52:13 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:21.966 05:52:13 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:05:21.966 05:52:13 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:05:21.966 05:52:13 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:21.966 05:52:13 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.966 05:52:13 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:21.966 ************************************ 00:05:21.966 START TEST denied 00:05:21.966 ************************************ 00:05:21.966 05:52:13 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:05:21.966 05:52:13 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:05:21.966 05:52:13 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:05:21.966 05:52:13 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:05:21.966 05:52:13 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:05:21.966 05:52:13 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:22.910 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:05:22.910 05:52:14 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:05:22.910 05:52:14 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev 
driver 00:05:22.910 05:52:14 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:05:22.910 05:52:14 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:05:22.910 05:52:14 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:05:22.910 05:52:14 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:22.910 05:52:14 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:22.910 05:52:14 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:05:22.910 05:52:14 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:22.910 05:52:14 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:23.476 00:05:23.476 real 0m1.463s 00:05:23.476 user 0m0.637s 00:05:23.476 sys 0m0.782s 00:05:23.476 05:52:14 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.476 05:52:14 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:05:23.476 ************************************ 00:05:23.476 END TEST denied 00:05:23.476 ************************************ 00:05:23.476 05:52:14 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:05:23.476 05:52:14 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:23.476 05:52:14 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:23.476 05:52:14 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.476 05:52:14 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:23.476 ************************************ 00:05:23.476 START TEST allowed 00:05:23.476 ************************************ 00:05:23.476 05:52:15 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:05:23.476 05:52:15 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:05:23.476 05:52:15 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:05:23.476 05:52:15 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:05:23.476 05:52:15 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:05:23.476 05:52:15 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:24.412 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:24.412 05:52:15 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:05:24.412 05:52:15 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:05:24.412 05:52:15 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:05:24.412 05:52:15 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:05:24.412 05:52:15 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:05:24.412 05:52:15 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:24.412 05:52:15 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:24.412 05:52:15 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:05:24.412 05:52:15 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:24.412 05:52:15 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:24.980 00:05:24.980 real 0m1.522s 00:05:24.980 user 0m0.699s 00:05:24.980 sys 0m0.817s 00:05:24.980 05:52:16 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:05:24.980 05:52:16 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:05:24.980 ************************************ 00:05:24.980 END TEST allowed 00:05:24.980 ************************************ 00:05:24.981 05:52:16 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:05:24.981 00:05:24.981 real 0m4.779s 00:05:24.981 user 0m2.173s 00:05:24.981 sys 0m2.566s 00:05:24.981 05:52:16 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.981 ************************************ 00:05:24.981 END TEST acl 00:05:24.981 05:52:16 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:24.981 ************************************ 00:05:24.981 05:52:16 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:24.981 05:52:16 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:24.981 05:52:16 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:24.981 05:52:16 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.981 05:52:16 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:24.981 ************************************ 00:05:24.981 START TEST hugepages 00:05:24.981 ************************************ 00:05:24.981 05:52:16 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:24.981 * Looking for test storage... 00:05:24.981 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:24.981 05:52:16 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:24.981 05:52:16 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:24.981 05:52:16 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:24.981 05:52:16 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:24.981 05:52:16 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:24.981 05:52:16 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:24.981 05:52:16 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:24.981 05:52:16 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:05:24.981 05:52:16 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:05:24.981 05:52:16 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:05:24.981 05:52:16 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:24.981 05:52:16 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:24.981 05:52:16 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:24.981 05:52:16 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:05:24.981 05:52:16 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:24.981 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:24.981 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:24.981 05:52:16 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 4864028 kB' 'MemAvailable: 7381048 kB' 'Buffers: 2436 kB' 'Cached: 2721888 kB' 'SwapCached: 0 kB' 'Active: 435964 kB' 'Inactive: 2392932 kB' 'Active(anon): 115064 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2392932 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 
kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 106436 kB' 'Mapped: 48868 kB' 'Shmem: 10492 kB' 'KReclaimable: 80256 kB' 'Slab: 157600 kB' 'SReclaimable: 80256 kB' 'SUnreclaim: 77344 kB' 'KernelStack: 6556 kB' 'PageTables: 4240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412436 kB' 'Committed_AS: 345996 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:05:24.981 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:24.981 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:24.981 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:24.981 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:24.981 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:24.981 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:24.981 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:24.981 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:24.981 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:24.981 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:24.981 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:24.981 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:24.981 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:24.981 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:24.981 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:24.981 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:24.981 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:24.981 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:24.981 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:24.981 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:24.981 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:24.981 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:24.981 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:24.981 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:24.981 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:24.981 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:24.981 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:24.981 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:24.981 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:24.981 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:24.981 05:52:16 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:05:24.981 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:24.981 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:24.981 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:24.981 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:24.981 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:24.981 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:24.981 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:24.981 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:24.981 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:24.981 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:24.981 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:24.981 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:24.981 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:24.981 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:24.981 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:24.981 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:24.981 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.240 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.240 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.240 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.240 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.240 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.240 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.240 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.240 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.241 05:52:16 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.241 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.241 05:52:16 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.242 05:52:16 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:25.242 05:52:16 
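The long run of '[[ <field> == \H\u\g\e\p\a\g\e\s\i\z\e ]] ... continue' entries above is set -x output from the get_meminfo helper in setup/common.sh walking /proc/meminfo one field at a time until it reaches Hugepagesize; the backslashes are simply how xtrace prints the quoted match target. The walk ends with 'echo 2048' and 'return 0', which is where default_hugepages=2048 comes from. Below is a minimal sketch of that lookup, reconstructed only from the statements visible in the trace (IFS=': ', read -r var val _, the echo and return 0); the real helper additionally buffers the file with mapfile, strips per-node 'Node N ' prefixes, and can read /sys/devices/system/node/nodeN/meminfo, none of which this sketch reproduces.

#!/usr/bin/env bash
# Minimal sketch (not the verbatim setup/common.sh) of the meminfo lookup traced above.
get_meminfo_sketch() {
    local get=$1
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # xtrace renders this quoted target with backslash escapes
        echo "$val"                        # value only, e.g. 2048 for Hugepagesize on this host
        return 0
    done </proc/meminfo
    return 1
}

get_meminfo_sketch Hugepagesize    # -> 2048 in this run
get_meminfo_sketch AnonHugePages   # the same lookup is repeated later in this log by verify_nr_hugepages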
setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:25.242 05:52:16 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:25.242 05:52:16 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:25.242 05:52:16 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.242 05:52:16 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:25.242 ************************************ 00:05:25.242 START TEST default_setup 00:05:25.242 ************************************ 00:05:25.242 05:52:16 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:05:25.242 05:52:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:25.243 05:52:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:05:25.243 05:52:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:25.243 05:52:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:05:25.243 05:52:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:25.243 05:52:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:05:25.243 05:52:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:25.243 05:52:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:25.243 05:52:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:25.243 05:52:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:25.243 05:52:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:05:25.243 05:52:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:25.243 05:52:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:25.243 05:52:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:25.243 05:52:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:25.243 05:52:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:25.243 05:52:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:25.243 05:52:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:25.243 05:52:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:05:25.243 05:52:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:05:25.243 05:52:16 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:05:25.243 05:52:16 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:25.810 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:25.810 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:26.075 0000:00:11.0 (1b36 
0010): nvme -> uio_pci_generic 00:05:26.075 05:52:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:26.075 05:52:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:05:26.075 05:52:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:05:26.075 05:52:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:05:26.075 05:52:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:05:26.075 05:52:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:05:26.075 05:52:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:05:26.075 05:52:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:26.075 05:52:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:26.075 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:26.075 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:26.075 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:26.075 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:26.075 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:26.075 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:26.075 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:26.075 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:26.075 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:26.075 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.075 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.075 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6973260 kB' 'MemAvailable: 9490188 kB' 'Buffers: 2436 kB' 'Cached: 2721884 kB' 'SwapCached: 0 kB' 'Active: 453240 kB' 'Inactive: 2392948 kB' 'Active(anon): 132340 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2392948 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 123476 kB' 'Mapped: 48972 kB' 'Shmem: 10468 kB' 'KReclaimable: 80036 kB' 'Slab: 157264 kB' 'SReclaimable: 80036 kB' 'SUnreclaim: 77228 kB' 'KernelStack: 6512 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 363028 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:05:26.075 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.075 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.075 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.075 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.075 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.075 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.075 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.075 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.075 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.075 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.075 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.075 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.075 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.075 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.075 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.075 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.075 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.075 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.075 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.075 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.075 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.075 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.075 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.075 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.075 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.075 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.075 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.075 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.075 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.075 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.075 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.075 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.075 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.075 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.075 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.075 05:52:17 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:05:26.075 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.075 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.076 05:52:17 
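For context on this verify pass: the '[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]' test that opened it (setup/hugepages.sh line 96 in the trace) guards against transparent hugepages being disabled; the 'always [madvise] never' string is presumably the expanded contents of /sys/kernel/mm/transparent_hugepage/enabled, with the bracketed word marking the active mode. The meminfo walks that follow collect AnonHugePages into anon, then HugePages_Surp and HugePages_Rsvd into surp and resv. A self-contained sketch of that THP guard, under the assumption just stated:

#!/usr/bin/env bash
# Hedged sketch of a THP guard equivalent to the pattern test seen at the start of this pass.
# Assumption: the "always [madvise] never" string in the trace is the contents of the
# transparent_hugepage 'enabled' knob; the bracketed entry is the active mode.
thp_enabled=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null || echo unknown)
if [[ $thp_enabled != *"[never]"* ]]; then
    echo "transparent hugepages are not disabled: $thp_enabled"
else
    echo "transparent hugepages are set to [never]"
fi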
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.076 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- 
# mem=("${mem[@]#Node +([0-9]) }") 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6973856 kB' 'MemAvailable: 9490784 kB' 'Buffers: 2436 kB' 'Cached: 2721884 kB' 'SwapCached: 0 kB' 'Active: 452904 kB' 'Inactive: 2392948 kB' 'Active(anon): 132004 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2392948 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 123108 kB' 'Mapped: 48868 kB' 'Shmem: 10468 kB' 'KReclaimable: 80036 kB' 'Slab: 157260 kB' 'SReclaimable: 80036 kB' 'SUnreclaim: 77224 kB' 'KernelStack: 6496 kB' 'PageTables: 4176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 363028 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.077 05:52:17 
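The snapshot printed just above already reflects the state the default_setup test is driving toward: HugePages_Total and HugePages_Free are both 1024 and Hugepagesize is 2048 kB, so the reserved pool is 1024 x 2048 kB = 2097152 kB, matching both the Hugetlb figure in the same dump and the size of 2097152 that get_test_nr_hugepages received earlier in this log when it settled on nr_hugepages=1024. A trivial cross-check of that accounting, with the numbers taken from the dump above:

#!/usr/bin/env bash
# Cross-check of the hugepage accounting visible in the meminfo dump above (numbers copied from it).
hp_total=1024     # HugePages_Total
hp_size_kb=2048   # Hugepagesize in kB
echo $((hp_total * hp_size_kb))    # 2097152 -> the Hugetlb: 2097152 kB line
echo $((2097152 / hp_size_kb))     # 1024    -> the nr_hugepages value the test requested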
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.077 05:52:17 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.077 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.078 05:52:17 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.078 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': 
' 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6973856 kB' 'MemAvailable: 9490784 kB' 'Buffers: 2436 kB' 'Cached: 2721884 kB' 'SwapCached: 0 kB' 'Active: 452808 kB' 'Inactive: 2392948 kB' 'Active(anon): 131908 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2392948 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 123016 kB' 'Mapped: 
48868 kB' 'Shmem: 10468 kB' 'KReclaimable: 80036 kB' 'Slab: 157252 kB' 'SReclaimable: 80036 kB' 'SUnreclaim: 77216 kB' 'KernelStack: 6496 kB' 'PageTables: 4176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 363028 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # IFS=': ' 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.079 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.080 
05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.080 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:05:26.081 nr_hugepages=1024 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:26.081 resv_hugepages=0 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:26.081 surplus_hugepages=0 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:26.081 anon_hugepages=0 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6974112 kB' 'MemAvailable: 9491040 kB' 'Buffers: 2436 kB' 'Cached: 2721884 kB' 'SwapCached: 0 kB' 'Active: 452768 kB' 'Inactive: 2392948 kB' 'Active(anon): 131868 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2392948 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122968 kB' 'Mapped: 48868 kB' 'Shmem: 10468 kB' 'KReclaimable: 80036 kB' 'Slab: 157252 kB' 'SReclaimable: 80036 kB' 'SUnreclaim: 77216 kB' 'KernelStack: 6496 kB' 'PageTables: 4176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 363028 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.081 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.082 05:52:17 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.082 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.083 
05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.083 05:52:17 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # 
no_nodes=1 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6973860 kB' 'MemUsed: 5268112 kB' 'SwapCached: 0 kB' 'Active: 452736 kB' 'Inactive: 2392948 kB' 'Active(anon): 131836 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2392948 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'FilePages: 2724320 kB' 'Mapped: 48868 kB' 'AnonPages: 123028 kB' 'Shmem: 10468 kB' 'KernelStack: 6496 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 80036 kB' 'Slab: 157252 kB' 'SReclaimable: 80036 kB' 'SUnreclaim: 77216 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:26.083 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.084 05:52:17 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.084 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.085 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.085 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.085 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.085 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.085 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.085 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.085 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.085 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.085 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.085 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.085 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.085 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.085 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.085 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.085 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.085 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.085 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.085 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.085 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.085 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.085 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.085 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.085 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.085 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.085 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.085 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.085 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.085 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.085 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.085 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:26.085 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:26.085 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:26.085 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.085 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:26.085 05:52:17 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 
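The wall of xtrace above is SPDK's setup/common.sh get_meminfo helper walking /proc/meminfo one field at a time: IFS=': ' read -r var val _ splits each line into key and value, every key that is not the requested HugePages_Surp hits continue, and the matching line's value (0 here) is echoed back at common.sh@33. A minimal stand-alone sketch of that lookup, simplified from what the trace shows rather than copied from common.sh:

    # Return the value of a single /proc/meminfo field, as the trace above does.
    # Simplified sketch; the real helper also supports per-node meminfo files.
    get_meminfo_field() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip every non-matching key
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1
    }

    get_meminfo_field HugePages_Surp   # prints 0 on this test VM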
00:05:26.085 05:52:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:26.085 05:52:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:26.085 05:52:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:26.085 05:52:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:26.085 05:52:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:26.085 node0=1024 expecting 1024 00:05:26.085 05:52:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:26.085 00:05:26.085 real 0m1.025s 00:05:26.085 user 0m0.487s 00:05:26.085 sys 0m0.491s 00:05:26.085 05:52:17 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.085 05:52:17 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:05:26.085 ************************************ 00:05:26.085 END TEST default_setup 00:05:26.085 ************************************ 00:05:26.345 05:52:17 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:26.345 05:52:17 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:05:26.345 05:52:17 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:26.345 05:52:17 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.345 05:52:17 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:26.345 ************************************ 00:05:26.345 START TEST per_node_1G_alloc 00:05:26.345 ************************************ 00:05:26.345 05:52:17 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:05:26.345 05:52:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:05:26.345 05:52:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:05:26.345 05:52:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:26.345 05:52:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:26.345 05:52:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:05:26.345 05:52:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:26.345 05:52:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:26.345 05:52:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:26.345 05:52:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:26.345 05:52:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:26.345 05:52:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:26.345 05:52:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:26.345 05:52:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:26.345 05:52:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:26.345 05:52:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:26.345 05:52:17 
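Between the end of default_setup and this point, the per_node_1G_alloc test has translated its arguments into a page budget: get_test_nr_hugepages 1048576 0 takes a 1 GiB request plus node id 0, divides by the default hugepage size (2048 kB on this VM, per the Hugepagesize field echoed later in the trace), and lands on nr_hugepages=512, which the per-node loop in the next trace lines assigns to node 0. The arithmetic, sketched with illustrative variable names rather than the script's own:

    # Convert a size in kB into a count of default-sized hugepages (sketch).
    size_kb=1048576                                                 # 1 GiB requested
    hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)  # 2048 on this VM
    nr_hugepages=$(( size_kb / hugepage_kb ))                       # 1048576 / 2048 = 512
    echo "nr_hugepages=$nr_hugepages for node 0"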
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:26.345 05:52:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:26.345 05:52:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:26.345 05:52:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:26.345 05:52:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:26.345 05:52:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:05:26.345 05:52:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:05:26.345 05:52:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:05:26.345 05:52:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:26.345 05:52:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:26.607 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:26.607 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:26.607 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc 
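With NRHUGE=512 and HUGENODE=0 exported, scripts/setup.sh is then invoked to reserve those pages on node 0 specifically (the PCI lines above are its usual device-binding output), after which verify_nr_hugepages starts sampling counters; the transparent_hugepage check right before get_meminfo AnonHugePages simply confirms THP is not set to never before that counter is read. The kernel exposes the per-node reservation through sysfs, so the effect can be checked directly. A sketch of that node-scoped knob, offered as an illustration of the kernel interface rather than of the exact commands setup.sh runs:

    # Reserve 512 x 2 MiB hugepages on NUMA node 0 and read the count back.
    # Requires root; the node0/hugepages-2048kB path assumes this VM's topology.
    echo 512 | sudo tee /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
    cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages   # 512 once reserved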
-- setup/common.sh@31 -- # read -r var val _ 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8015676 kB' 'MemAvailable: 10532608 kB' 'Buffers: 2436 kB' 'Cached: 2721884 kB' 'SwapCached: 0 kB' 'Active: 453580 kB' 'Inactive: 2392952 kB' 'Active(anon): 132680 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2392952 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 123840 kB' 'Mapped: 49112 kB' 'Shmem: 10468 kB' 'KReclaimable: 80036 kB' 'Slab: 157360 kB' 'SReclaimable: 80036 kB' 'SUnreclaim: 77324 kB' 'KernelStack: 6500 kB' 'PageTables: 4264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 363028 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.608 05:52:18 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.608 05:52:18 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.608 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.609 05:52:18 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.609 05:52:18 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.609 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8015676 kB' 'MemAvailable: 10532608 kB' 'Buffers: 2436 kB' 'Cached: 2721884 kB' 'SwapCached: 0 kB' 'Active: 453192 kB' 'Inactive: 2392952 kB' 'Active(anon): 132292 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2392952 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 123480 kB' 'Mapped: 48992 kB' 'Shmem: 10468 kB' 'KReclaimable: 80036 kB' 'Slab: 157376 kB' 'SReclaimable: 80036 kB' 'SUnreclaim: 77340 kB' 'KernelStack: 6500 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 363028 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.610 05:52:18 
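verify_nr_hugepages gathers its numbers through the same get_meminfo helper, and the trace shows how the node argument steers it: with node left empty the /sys/devices/system/node/node/meminfo existence test fails and mem_f stays at /proc/meminfo, whereas a per-node query would read that node's own meminfo file, whose lines carry a "Node N " prefix that the mem=("${mem[@]#Node +([0-9]) }") expansion strips. A sketch of that source selection, hedged as a simplified reading of the trace rather than the helper's full code:

    # Pick the meminfo source the way the trace's get_meminfo does (sketch).
    shopt -s extglob                         # needed for the +([0-9]) pattern below
    node=""                                  # "" -> system-wide, "0" -> NUMA node 0
    mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")         # strip the "Node N " prefix on per-node lines
    printf '%s\n' "${mem[@]}" | grep -E '^HugePages_'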
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.610 05:52:18 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.610 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.611 05:52:18 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.611 05:52:18 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.611 05:52:18 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.611 05:52:18 
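The snapshot echoed in this pass already reflects the allocation: HugePages_Total: 512, HugePages_Free: 512 and Hugetlb: 1048576 kB, while the scan is pulling out HugePages_Surp (surplus pages allocated beyond the configured count), and the snapshot itself already lists both HugePages_Surp and HugePages_Rsvd as 0. For reference, the same counters can be pulled in one line outside the test harness (illustrative only):

    # The hugepage counters this verification pass samples from /proc/meminfo.
    grep -E '^(HugePages_(Total|Free|Rsvd|Surp)|Hugetlb):' /proc/meminfo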
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.611 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8015676 kB' 'MemAvailable: 10532608 kB' 'Buffers: 2436 kB' 'Cached: 2721884 kB' 'SwapCached: 0 kB' 'Active: 452900 kB' 'Inactive: 2392952 kB' 'Active(anon): 132000 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2392952 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 123184 kB' 'Mapped: 48868 kB' 'Shmem: 10468 kB' 'KReclaimable: 80036 kB' 'Slab: 157360 kB' 'SReclaimable: 80036 kB' 'SUnreclaim: 77324 kB' 'KernelStack: 6512 kB' 'PageTables: 4216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 363028 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
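The entries above are the harness's get_meminfo loop walking every /proc/meminfo key with IFS=': ' and skipping everything that is not the requested field (first HugePages_Surp, now HugePages_Rsvd), which is where all of the repeated "continue" lines come from. A minimal standalone sketch of that lookup pattern, assuming a hypothetical helper name meminfo_value and reading /proc/meminfo directly instead of the harness's captured array:

#!/usr/bin/env bash
# Minimal sketch of the key lookup the log repeats for each field:
# split "Key:   value [kB]" on ':' plus spaces, skip non-matching keys
# (the endless "continue" entries above), and print the value.
# meminfo_value is a hypothetical name, not the harness's get_meminfo.
meminfo_value() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # same skip pattern as common.sh@32
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}

meminfo_value HugePages_Rsvd   # prints 0 on the run captured above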
00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.612 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.613 05:52:18 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
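Before that loop runs, common.sh@22-29 decides where the lines come from and normalizes them: with a node number it reads /sys/devices/system/node/node<N>/meminfo, otherwise (as here, where node is empty and the probe of the non-existent .../node/node/meminfo fails) it falls back to /proc/meminfo, then strips any leading "Node N " prefix so both sources parse identically. A hedged sketch of that selection and prefix stripping, with an illustrative wrapper name:

#!/usr/bin/env bash
# Sketch of the source selection and "Node N " prefix stripping seen at
# common.sh@22-29; node_meminfo_lines is an illustrative wrapper only.
shopt -s extglob                        # required for the +([0-9]) pattern
node_meminfo_lines() {
    local node=$1 mem_f=/proc/meminfo
    local -a mem
    # With an empty $node this probes ".../node/node/meminfo", fails,
    # and the system-wide /proc/meminfo is used instead -- exactly the
    # behaviour visible in the log above.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node lines look like "Node 0 HugePages_Free:   512";
    # drop the "Node 0 " prefix so the parse loop sees plain keys.
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}"
}

node_meminfo_lines 0 | grep HugePages   # per-node hugepage counters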
00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.613 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.614 
05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.614 05:52:18 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:26.614 nr_hugepages=512 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:26.614 resv_hugepages=0 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:26.614 surplus_hugepages=0 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:26.614 anon_hugepages=0 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8016356 kB' 'MemAvailable: 10533288 kB' 'Buffers: 2436 kB' 'Cached: 2721884 kB' 'SwapCached: 0 kB' 'Active: 452832 kB' 'Inactive: 2392952 kB' 'Active(anon): 131932 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2392952 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 
kB' 'Writeback: 0 kB' 'AnonPages: 123064 kB' 'Mapped: 48868 kB' 'Shmem: 10468 kB' 'KReclaimable: 80036 kB' 'Slab: 157360 kB' 'SReclaimable: 80036 kB' 'SUnreclaim: 77324 kB' 'KernelStack: 6496 kB' 'PageTables: 4164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 363028 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
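The bookkeeping echoed a little above (nr_hugepages=512, resv_hugepages=0, surplus_hugepages=0) feeds the assertions at hugepages.sh@107 and @110: the kernel's HugePages_Total has to equal the requested page count plus surplus plus reserved pages. A hedged restatement of that check, with an illustrative awk helper and the 512 of this particular run hard-coded:

#!/usr/bin/env bash
# Hedged restatement of the accounting asserted around hugepages.sh@107/@110:
# HugePages_Total reported by the kernel should equal the requested count
# plus surplus plus reserved pages.  hp_value and the hard-coded 512 are
# illustrative; the harness derives these through its own get_meminfo.
hp_value() { awk -v k="$1:" '$1 == k { print $2 }' /proc/meminfo; }

nr_hugepages=512                        # value this test run asked for
surp=$(hp_value HugePages_Surp)
resv=$(hp_value HugePages_Rsvd)
total=$(hp_value HugePages_Total)

if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting consistent: total=$total"
else
    echo "mismatch: total=$total nr=$nr_hugepages surp=$surp resv=$resv" >&2
    exit 1
fi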
00:05:26.614 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.615 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.615 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.615 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.615 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.615 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.615 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.615 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.615 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.615 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.615 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.615 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.615 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.615 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.615 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.615 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.615 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.615 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.615 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.615 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.615 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.615 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.615 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.615 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.615 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.615 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.615 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.615 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.615 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.615 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.615 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.615 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.615 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
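Once the system-wide totals check out, the per-node pass further down (hugepages.sh@27-33 and @115-130) enumerates /sys/devices/system/node/node<N>, reads each node's own hugepage counters, and ends with the "node0=512 expecting 512" line for this single-node VM. A sketch of that per-node verification under the same assumptions (an expected share of 512 pages, illustrative helper names):

#!/usr/bin/env bash
# Sketch of the per-node verification shown later in the log; names and
# the expected per-node count of 512 are assumptions for this run only.
shopt -s extglob nullglob
expected_per_node=512

node_value() {
    # Per-node counters are prefixed "Node N", e.g.
    # "Node 0 HugePages_Total:   512" -> field 4 is the value.
    awk -v k="$2:" '$3 == k { print $4 }' "/sys/devices/system/node/node$1/meminfo"
}

for dir in /sys/devices/system/node/node+([0-9]); do
    node=${dir##*node}
    got=$(node_value "$node" HugePages_Total)
    echo "node$node=$got expecting $expected_per_node"
    [[ $got == "$expected_per_node" ]] || exit 1
done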
00:05:26.615 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.876 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.876 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.876 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.876 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.876 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.876 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.876 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.876 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.876 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.876 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.876 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.876 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.876 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.876 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.876 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.876 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.876 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.876 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.876 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.876 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.876 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.876 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.876 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.876 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.876 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.876 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.876 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.876 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.876 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.876 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.876 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.876 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.876 05:52:18 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.876 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.876 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.876 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.876 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.876 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.876 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.876 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.876 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.876 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.876 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.876 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.876 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.876 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.876 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.876 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.876 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.876 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.876 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.876 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.876 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.876 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.876 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.876 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.876 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.876 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.877 
05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:26.877 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8016356 kB' 'MemUsed: 4225616 kB' 'SwapCached: 0 kB' 'Active: 452952 kB' 'Inactive: 2392952 kB' 'Active(anon): 132052 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2392952 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'FilePages: 2724320 kB' 'Mapped: 48868 kB' 'AnonPages: 123192 kB' 'Shmem: 10468 kB' 'KernelStack: 6512 kB' 'PageTables: 4216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 80036 kB' 'Slab: 157360 kB' 'SReclaimable: 80036 kB' 'SUnreclaim: 77324 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.878 05:52:18 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.878 05:52:18 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.878 05:52:18 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.878 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.879 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.879 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.879 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.879 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.879 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.879 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.879 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.879 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.879 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.879 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.879 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.879 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.879 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.879 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.879 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.879 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.879 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.879 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.879 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.879 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.879 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.879 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.879 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.879 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.879 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.879 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.879 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.879 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.879 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.879 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.879 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.879 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.879 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.879 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.879 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.879 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.879 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.879 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.879 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.879 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.879 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.879 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.879 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.879 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.879 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:26.879 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:26.879 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:26.879 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:26.879 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:26.879 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:26.879 node0=512 expecting 512 00:05:26.879 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:26.879 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:26.879 00:05:26.879 real 0m0.544s 00:05:26.879 user 0m0.290s 00:05:26.879 sys 0m0.290s 00:05:26.879 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.879 05:52:18 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:26.879 ************************************ 00:05:26.879 END TEST per_node_1G_alloc 00:05:26.879 ************************************ 00:05:26.879 05:52:18 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:26.879 05:52:18 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:26.879 05:52:18 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:26.879 05:52:18 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.879 05:52:18 
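[annotation] The trace above is setup/common.sh's get_meminfo walking /proc/meminfo with `IFS=': ' read -r var val _`, skipping every key that is not HugePages_Surp and finally echoing 0; per_node_1G_alloc then passes because node0 holds the expected 512 pages. As a minimal sketch of the same field-extraction idea (a hypothetical helper written for illustration, not the repository's code):

    #!/usr/bin/env bash
    # get_meminfo_field KEY [NODE] - print the value of KEY from /proc/meminfo,
    # or from the per-node meminfo file when a node number is given.
    get_meminfo_field() {
        local key=$1 node=${2:-}
        local file=/proc/meminfo
        [[ -n $node ]] && file=/sys/devices/system/node/node$node/meminfo
        # Per-node files prefix each line with "Node <n> "; strip that, match the
        # key, and print only the numeric value (units such as kB are dropped).
        awk -v k="$key" '{ sub(/^Node [0-9]+ /, "") } $1 == k":" { print $2 }' "$file"
    }

    get_meminfo_field HugePages_Surp        # e.g. 0
    get_meminfo_field HugePages_Free 0      # per-node variant, e.g. 512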
setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:26.879 ************************************ 00:05:26.879 START TEST even_2G_alloc 00:05:26.879 ************************************ 00:05:26.879 05:52:18 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:05:26.879 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:26.879 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:26.879 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:26.879 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:26.879 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:26.879 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:26.879 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:26.879 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:26.879 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:26.879 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:26.879 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:26.879 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:26.879 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:26.879 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:26.879 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:26.879 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:05:26.879 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:26.879 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:26.879 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:26.879 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:05:26.879 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:05:26.879 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:05:26.879 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:26.879 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:27.140 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:27.140 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:27.140 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:27.140 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:05:27.140 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:27.140 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:27.140 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:27.140 05:52:18 setup.sh.hugepages.even_2G_alloc 
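[annotation] even_2G_alloc starts here: get_test_nr_hugepages is handed 2097152 (read as kB, i.e. 2 GiB), which at the default 2048 kB hugepage size works out to 1024 pages, so the test sets NRHUGE=1024 and HUGE_EVEN_ALLOC=yes before re-running scripts/setup.sh. A back-of-the-envelope check of that sizing (illustrative only, not the script's code):

    # 2 GiB expressed in kB, divided by the default 2 MiB hugepage size in kB:
    size_kb=2097152
    default_hugepage_kb=$(awk '$1 == "Hugepagesize:" { print $2 }' /proc/meminfo)  # 2048 on this box
    echo $(( size_kb / default_hugepage_kb ))   # -> 1024, matching NRHUGE=1024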
-- setup/hugepages.sh@92 -- # local surp 00:05:27.140 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:27.140 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:27.140 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:27.140 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:27.140 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:27.140 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:27.140 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:27.140 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:27.140 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:27.140 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:27.140 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:27.140 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:27.140 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:27.140 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.140 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.140 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6970752 kB' 'MemAvailable: 9487684 kB' 'Buffers: 2436 kB' 'Cached: 2721884 kB' 'SwapCached: 0 kB' 'Active: 453024 kB' 'Inactive: 2392952 kB' 'Active(anon): 132124 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2392952 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 123340 kB' 'Mapped: 48996 kB' 'Shmem: 10468 kB' 'KReclaimable: 80036 kB' 'Slab: 157368 kB' 'SReclaimable: 80036 kB' 'SUnreclaim: 77332 kB' 'KernelStack: 6500 kB' 'PageTables: 4264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 363028 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:05:27.140 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.140 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.140 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.140 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.140 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.140 05:52:18 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:05:27.140 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.140 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.141 05:52:18 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.141 05:52:18 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.141 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6970500 kB' 'MemAvailable: 9487432 kB' 'Buffers: 2436 kB' 'Cached: 2721884 kB' 'SwapCached: 0 kB' 'Active: 452948 kB' 'Inactive: 2392952 kB' 'Active(anon): 132048 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 
2392952 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 123124 kB' 'Mapped: 48868 kB' 'Shmem: 10468 kB' 'KReclaimable: 80036 kB' 'Slab: 157368 kB' 'SReclaimable: 80036 kB' 'SUnreclaim: 77332 kB' 'KernelStack: 6496 kB' 'PageTables: 4172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 363028 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
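[annotation] At this point verify_nr_hugepages has read AnonHugePages (anon=0) and is scanning a fresh snapshot for HugePages_Surp; the snapshot itself already shows the even allocation landed: HugePages_Total 1024, HugePages_Free 1024, Hugepagesize 2048 kB, Hugetlb 2097152 kB. A quick self-consistency check of those figures on a live system (illustrative only; note that with mixed hugepage sizes the Hugetlb line aggregates all of them):

    # HugePages_Total * Hugepagesize should account for the Hugetlb line here:
    awk '$1 == "HugePages_Total:" { t = $2 }
         $1 == "Hugepagesize:"    { s = $2 }
         $1 == "Hugetlb:"         { h = $2 }
         END { printf "%d * %d kB = %d kB (Hugetlb reports %d kB)\n", t, s, t*s, h }' /proc/meminfo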
00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.142 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.405 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.405 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.405 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.405 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.405 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.405 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.405 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.405 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.405 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.405 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.405 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.405 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.405 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.405 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.405 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.405 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.405 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.405 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.405 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.405 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.405 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.405 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.406 05:52:18 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.406 05:52:18 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.406 05:52:18 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.406 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6970500 kB' 'MemAvailable: 9487432 kB' 'Buffers: 2436 kB' 'Cached: 2721884 kB' 'SwapCached: 0 kB' 'Active: 452676 kB' 'Inactive: 2392952 kB' 'Active(anon): 131776 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2392952 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 123112 kB' 'Mapped: 48868 kB' 'Shmem: 10468 kB' 'KReclaimable: 80036 kB' 'Slab: 157368 kB' 'SReclaimable: 80036 kB' 'SUnreclaim: 77332 kB' 'KernelStack: 6496 kB' 'PageTables: 4168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 363028 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.407 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.408 05:52:18 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.408 05:52:18 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.408 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@33 -- # return 0 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:27.409 nr_hugepages=1024 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:27.409 resv_hugepages=0 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:27.409 surplus_hugepages=0 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:27.409 anon_hugepages=0 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6970500 kB' 'MemAvailable: 9487432 kB' 'Buffers: 2436 kB' 'Cached: 2721884 kB' 'SwapCached: 0 kB' 'Active: 452904 kB' 'Inactive: 2392952 kB' 'Active(anon): 132004 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2392952 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 123080 kB' 'Mapped: 48868 kB' 'Shmem: 10468 kB' 'KReclaimable: 80036 kB' 'Slab: 157368 kB' 'SReclaimable: 80036 kB' 'SUnreclaim: 77332 kB' 'KernelStack: 6496 kB' 'PageTables: 4168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 363028 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:05:27.409 05:52:18 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.409 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.410 05:52:18 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.410 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
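The xtrace above and below is the per-field scan done by the test's get_meminfo helper (setup/common.sh): it reads /proc/meminfo, or a node's own meminfo under /sys/devices/system/node when a node index is given, with IFS=': ', skips every field that is not the requested key, then echoes that key's value (0 for HugePages_Rsvd, 1024 for HugePages_Total). A minimal sketch of that pattern, reconstructed from the traced commands, is below; the sed-based "Node <n>" prefix stripping and the return codes are assumptions, the real helper in setup/common.sh may differ in detail.

    get_meminfo() {                    # usage sketch: get_meminfo <field> [<numa node>]
      local get=$1 node=$2 var val _
      local mem_f=/proc/meminfo
      # With a node index, prefer that node's own meminfo when it exists.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      # Per-node files prefix every line with "Node <n> "; drop that, then
      # split "Field:   value kB" on ': ' and print the requested value.
      while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
          echo "$val"
          return 0
        fi
      done < <(sed -E 's/^Node [0-9]+ +//' "$mem_f")
      return 1
    }

Called as get_meminfo HugePages_Total it yields the 1024 echoed at the end of this scan; get_meminfo HugePages_Surp 0 reads /sys/devices/system/node/node0/meminfo, which is the per-node lookup traced a little further on.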
00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:27.411 05:52:18 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:27.411 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6970796 kB' 'MemUsed: 5271176 kB' 'SwapCached: 0 kB' 'Active: 453012 kB' 'Inactive: 2392952 kB' 'Active(anon): 132112 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2392952 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 2724320 kB' 'Mapped: 48868 kB' 'AnonPages: 123184 kB' 'Shmem: 10468 kB' 'KernelStack: 6512 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 80036 kB' 'Slab: 157364 kB' 'SReclaimable: 80036 kB' 'SUnreclaim: 77328 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.412 05:52:18 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.412 05:52:18 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.412 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.413 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.413 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.413 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.413 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.413 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.413 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.413 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.413 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.413 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.413 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.413 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.413 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.413 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.413 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.413 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.413 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.413 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.413 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.413 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.413 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.413 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.413 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.413 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.413 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.413 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.413 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.413 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.413 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.413 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.413 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.413 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.413 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.413 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.413 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.413 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.413 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.413 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.413 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.413 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.413 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.413 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.413 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.413 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.413 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.413 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.413 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.413 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.413 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.413 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.413 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.413 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.413 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.413 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.413 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.413 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.413 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.413 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:27.413 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:27.413 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:27.413 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:27.413 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:27.413 05:52:18 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:27.413 node0=1024 expecting 1024 00:05:27.413 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:27.413 05:52:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:27.413 00:05:27.413 real 0m0.556s 00:05:27.413 user 0m0.293s 00:05:27.413 sys 0m0.296s 00:05:27.413 05:52:18 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.413 05:52:18 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:27.413 ************************************ 00:05:27.413 END TEST even_2G_alloc 00:05:27.413 ************************************ 00:05:27.413 05:52:19 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:27.413 05:52:19 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:27.413 05:52:19 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:27.413 05:52:19 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.413 05:52:19 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:27.413 ************************************ 00:05:27.413 START TEST odd_alloc 00:05:27.413 ************************************ 00:05:27.413 05:52:19 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:05:27.413 05:52:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:27.413 05:52:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:05:27.413 05:52:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:27.413 05:52:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:27.413 05:52:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:27.413 05:52:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:27.413 05:52:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:27.413 05:52:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:27.413 05:52:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:27.413 05:52:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:27.413 05:52:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:27.413 05:52:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:27.413 05:52:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:27.413 05:52:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:27.413 05:52:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:27.413 05:52:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:05:27.413 05:52:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:27.413 05:52:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:27.413 05:52:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:27.413 05:52:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:27.413 05:52:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 
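At this point even_2G_alloc has passed its check (node0=1024 expecting 1024), and odd_alloc starts by requesting a deliberately odd page count: HUGEMEM=2049 MB becomes get_test_nr_hugepages 2098176 (kB), which with the default 2048 kB hugepage size works out to nr_hugepages=1025. A small sketch of that arithmetic follows; the round-up is an assumption on my part, the trace only shows the 2098176 input, the resulting nr_hugepages=1025, and a later meminfo dump (HugePages_Total: 1025, Hugetlb: 2099200 kB) that is consistent with it.

    # Sizing sketch for the odd_alloc request. Values are taken from the log;
    # the ceiling division is an assumed rounding rule, not quoted from hugepages.sh.
    size_kb=2098176          # HUGEMEM=2049 MB expressed in kB
    hugepagesize_kb=2048     # default 2 MB hugepages, per Hugepagesize in the dumps above
    nr_hugepages=$(( (size_kb + hugepagesize_kb - 1) / hugepagesize_kb ))
    echo "nr_hugepages=$nr_hugepages"   # prints 1025, matching the trace

With a single NUMA node (no_nodes=1) the whole 1025 is assigned to node 0, which is what the HugePages_Total / HugePages_Free reads that follow are meant to verify.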
00:05:27.413 05:52:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:05:27.413 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:27.413 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:27.674 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:27.937 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:27.937 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:27.937 05:52:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:27.937 05:52:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:05:27.937 05:52:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:27.937 05:52:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:27.937 05:52:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:27.937 05:52:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:27.937 05:52:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:27.937 05:52:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:27.937 05:52:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:27.937 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:27.937 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:27.937 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:27.937 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:27.937 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:27.937 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:27.937 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:27.937 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:27.937 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:27.937 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.937 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6970192 kB' 'MemAvailable: 9487124 kB' 'Buffers: 2436 kB' 'Cached: 2721884 kB' 'SwapCached: 0 kB' 'Active: 453160 kB' 'Inactive: 2392952 kB' 'Active(anon): 132260 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2392952 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 123364 kB' 'Mapped: 49048 kB' 'Shmem: 10468 kB' 'KReclaimable: 80036 kB' 'Slab: 157384 kB' 'SReclaimable: 80036 kB' 'SUnreclaim: 77348 kB' 'KernelStack: 6500 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 363028 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:05:27.937 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.937 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.937 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.937 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.937 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.937 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.937 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.937 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.937 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.937 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.937 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.937 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.937 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.937 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.937 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.937 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.937 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.937 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.937 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.937 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.938 05:52:19 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.938 
05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.938 05:52:19 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.938 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.939 
05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
12241972 kB' 'MemFree: 6970192 kB' 'MemAvailable: 9487124 kB' 'Buffers: 2436 kB' 'Cached: 2721884 kB' 'SwapCached: 0 kB' 'Active: 452868 kB' 'Inactive: 2392952 kB' 'Active(anon): 131968 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2392952 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 123084 kB' 'Mapped: 48872 kB' 'Shmem: 10468 kB' 'KReclaimable: 80036 kB' 'Slab: 157384 kB' 'SReclaimable: 80036 kB' 'SUnreclaim: 77348 kB' 'KernelStack: 6496 kB' 'PageTables: 4172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 363028 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.939 05:52:19 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.939 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
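The backslash-heavy comparisons that fill this trace, e.g. [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]], are not literal source text: with xtrace enabled, bash prints a quoted right-hand side of == inside [[ ]] with every character escaped, to show it is matched as a literal string rather than as a glob. A tiny reproduction of that rendering (the variable names are illustrative):

    # With set -x, the quoted RHS of == inside [[ ]] is echoed
    # character-escaped, exactly like the lines above.
    set -x
    var=SwapTotal
    get=HugePages_Surp
    [[ $var == "$get" ]] || echo 'no match, keep scanning'

So each long run of "continue" above is one get_meminfo call stepping through every meminfo key until it reaches the field it was asked for.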
00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
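The loop being traced here is the get_meminfo helper from setup/common.sh: snapshot the meminfo text (system-wide or per-node), strip any "Node N " prefix, then walk the entries with IFS=': ' read -r var val _ and echo the value of the requested key. A condensed re-creation of that pattern, written from what the trace shows rather than copied from the canonical helper (treat it as a sketch):

    #!/usr/bin/env bash
    # Sketch: return one meminfo field, e.g. HugePages_Surp, from
    # /proc/meminfo or from a per-node meminfo file.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        shopt -s extglob
        mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix each key with "Node N "
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done
        echo 0   # fallback if the key is absent (assumption; the trace always finds its key)
    }

    get_meminfo_sketch HugePages_Surp    # prints 0 on this VM
    get_meminfo_sketch HugePages_Total   # prints 1025 once the pool is configured

Loading the snapshot into an array up front mirrors the mapfile -t mem step in the trace, and it is why every meminfo field appears in the log even though each call only wants a single key.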
00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.940 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.941 
05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6970512 kB' 'MemAvailable: 9487444 kB' 'Buffers: 2436 kB' 'Cached: 2721884 kB' 'SwapCached: 0 kB' 'Active: 452888 kB' 'Inactive: 2392952 kB' 'Active(anon): 131988 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2392952 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 123096 kB' 'Mapped: 48872 kB' 'Shmem: 10468 kB' 'KReclaimable: 80036 kB' 'Slab: 157384 kB' 'SReclaimable: 80036 kB' 'SUnreclaim: 77348 kB' 'KernelStack: 6496 kB' 'PageTables: 4172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 363028 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.941 05:52:19 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
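The scan is then repeated for HugePages_Rsvd; together with HugePages_Surp it lets verify_nr_hugepages tell pages that are merely reserved for not-yet-faulted mappings apart from surplus pages added beyond the configured pool. Outside the harness, the handful of counters these loops extract can be read with a single grep:

    grep -E '^(HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize)' /proc/meminfo

On this run the snapshot reports 1025 total and 1025 free 2048 kB pages with 0 reserved and 0 surplus, which is what lets the test settle on surp=0 and resv=0 before it checks the expected nr_hugepages=1025.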
00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.941 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.942 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.943 05:52:19 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:27.943 nr_hugepages=1025 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:05:27.943 resv_hugepages=0 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:27.943 surplus_hugepages=0 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:27.943 anon_hugepages=0 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6970900 kB' 'MemAvailable: 9487832 kB' 'Buffers: 2436 kB' 'Cached: 2721884 kB' 'SwapCached: 0 kB' 'Active: 452724 kB' 'Inactive: 2392952 kB' 'Active(anon): 131824 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2392952 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 122956 kB' 'Mapped: 48872 kB' 'Shmem: 10468 kB' 'KReclaimable: 80036 kB' 'Slab: 157380 kB' 'SReclaimable: 80036 kB' 'SUnreclaim: 77344 kB' 'KernelStack: 6512 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 363028 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.943 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.944 05:52:19 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.944 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.945 05:52:19 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@33 -- # echo 1025 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6970900 kB' 'MemUsed: 5271072 kB' 'SwapCached: 0 kB' 'Active: 452928 kB' 'Inactive: 2392952 kB' 'Active(anon): 132028 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2392952 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'FilePages: 2724320 kB' 'Mapped: 48872 kB' 'AnonPages: 123160 kB' 'Shmem: 10468 kB' 'KernelStack: 6496 kB' 'PageTables: 4172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 80036 kB' 'Slab: 157380 kB' 'SReclaimable: 80036 kB' 'SUnreclaim: 77344 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.945 05:52:19 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.945 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
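The per-node pass above points mem_f at /sys/devices/system/node/node0/meminfo and then walks it with the same IFS=': ' / read -r var val _ loop that fills the rest of this trace, echoing the value once the requested key matches. A minimal sketch of that extraction pattern, assuming a Linux host; the helper name and exact flow here are illustrative, not the repo's:

  # sketch of the field lookup visible in the trace; name and structure are illustrative
  get_meminfo_sketch() {
    local get=$1 node=${2-}
    local mem_f=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
      && mem_f=/sys/devices/system/node/node$node/meminfo
    # per-node files prefix each line with "Node <N> "; strip it before matching
    sed 's/^Node [0-9]* //' "$mem_f" | while IFS=': ' read -r var val _; do
      if [[ $var == "$get" ]]; then
        echo "$val"
        break
      fi
    done
  }
  # e.g. get_meminfo_sketch HugePages_Surp 0   would print 0 on this box
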
00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.946 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.947 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.947 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.947 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.947 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.947 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.947 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.947 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.947 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.947 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.947 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.947 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.947 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.947 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.947 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.947 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.947 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.947 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.947 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.947 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.947 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.947 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.947 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.947 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 
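The HugePages_Surp value just extracted (0) feeds the per-node bookkeeping directly below: the reserved and surplus counts are folded into each node's expected total, and the test then prints 'node0=1025 expecting 1025' and compares the two. A small single-node sketch of that arithmetic, with names taken from the trace and the rest simplified:

  # single-node simplification of the accounting shown below in the trace
  nr_hugepages=1025 resv=0 surp=0
  nodes_test=([0]=$nr_hugepages)
  (( nr_hugepages + surp + resv == 1025 )) || echo "unexpected hugepage total"
  for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv + surp ))
    echo "node${node}=${nodes_test[node]} expecting $nr_hugepages"
  done
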
00:05:27.947 05:52:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:27.947 05:52:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:27.947 05:52:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:27.947 05:52:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:27.947 05:52:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:27.947 node0=1025 expecting 1025 00:05:27.947 05:52:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:05:27.947 05:52:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:05:27.947 00:05:27.947 real 0m0.548s 00:05:27.947 user 0m0.282s 00:05:27.947 sys 0m0.304s 00:05:27.947 05:52:19 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.947 05:52:19 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:27.947 ************************************ 00:05:27.947 END TEST odd_alloc 00:05:27.947 ************************************ 00:05:27.947 05:52:19 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:27.947 05:52:19 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:27.947 05:52:19 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:27.947 05:52:19 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.947 05:52:19 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:27.947 ************************************ 00:05:27.947 START TEST custom_alloc 00:05:27.947 ************************************ 00:05:27.947 05:52:19 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:05:27.947 05:52:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:05:27.947 05:52:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:05:27.947 05:52:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:27.947 05:52:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:27.947 05:52:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:27.947 05:52:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:27.947 05:52:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:27.947 05:52:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:27.947 05:52:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:27.947 05:52:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:27.947 05:52:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:27.947 05:52:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:27.947 05:52:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:27.947 05:52:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:27.947 05:52:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:27.947 05:52:19 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@67 -- # nodes_test=() 00:05:27.947 05:52:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:27.947 05:52:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:27.947 05:52:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:27.947 05:52:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:27.947 05:52:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:27.947 05:52:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:27.947 05:52:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:27.947 05:52:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:27.947 05:52:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:27.947 05:52:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:05:27.947 05:52:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:27.947 05:52:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:27.947 05:52:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:27.947 05:52:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:27.947 05:52:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:27.947 05:52:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:27.947 05:52:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:27.947 05:52:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:27.947 05:52:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:27.947 05:52:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:27.947 05:52:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:27.947 05:52:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:27.947 05:52:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:27.947 05:52:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:27.947 05:52:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:27.947 05:52:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:05:27.947 05:52:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:05:27.947 05:52:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:27.947 05:52:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:28.519 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:28.519 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:28.519 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:28.519 05:52:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:05:28.519 05:52:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- 
# verify_nr_hugepages 00:05:28.519 05:52:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:05:28.519 05:52:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:28.519 05:52:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:28.519 05:52:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:28.519 05:52:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:28.519 05:52:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:28.519 05:52:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:28.519 05:52:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:28.519 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:28.519 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:28.519 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:28.519 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:28.519 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:28.519 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:28.519 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:28.519 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8016472 kB' 'MemAvailable: 10533404 kB' 'Buffers: 2436 kB' 'Cached: 2721884 kB' 'SwapCached: 0 kB' 'Active: 453860 kB' 'Inactive: 2392952 kB' 'Active(anon): 132960 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2392952 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 124140 kB' 'Mapped: 49068 kB' 'Shmem: 10468 kB' 'KReclaimable: 80036 kB' 'Slab: 157344 kB' 'SReclaimable: 80036 kB' 'SUnreclaim: 77308 kB' 'KernelStack: 6612 kB' 'PageTables: 4580 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 363028 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.520 05:52:20 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
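For custom_alloc the requested size above is 1048576 kB, which the setup turns into 512 pages at the 2048 kB default hugepage size and places on node 0 via HUGENODE='nodes_hp[0]=512'; the meminfo dump in this block duly reports HugePages_Total: 512 and Hugetlb: 1048576 kB. A sketch of that size-to-pages arithmetic, with a hypothetical helper name:

  # hypothetical helper: convert a target size in kB into a hugepage count
  pages_for_size() {
    local size_kb=$1 hp_kb
    hp_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
    echo $(( size_kb / hp_kb ))
  }
  pages_for_size 1048576   # 1048576 / 2048 -> 512 with the 2048 kB pages seen here
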
00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.520 05:52:20 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.520 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.521 05:52:20 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 12241972 kB' 'MemFree: 8016472 kB' 'MemAvailable: 10533404 kB' 'Buffers: 2436 kB' 'Cached: 2721884 kB' 'SwapCached: 0 kB' 'Active: 453120 kB' 'Inactive: 2392952 kB' 'Active(anon): 132220 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2392952 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 123144 kB' 'Mapped: 49008 kB' 'Shmem: 10468 kB' 'KReclaimable: 80036 kB' 'Slab: 157344 kB' 'SReclaimable: 80036 kB' 'SUnreclaim: 77308 kB' 'KernelStack: 6484 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 363028 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.521 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.522 05:52:20 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.522 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.523 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8016472 kB' 'MemAvailable: 10533404 kB' 'Buffers: 2436 kB' 'Cached: 2721884 kB' 'SwapCached: 0 kB' 'Active: 453056 kB' 'Inactive: 2392952 kB' 'Active(anon): 132156 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2392952 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 123236 kB' 'Mapped: 48872 kB' 'Shmem: 10468 kB' 'KReclaimable: 80036 kB' 'Slab: 157344 kB' 'SReclaimable: 80036 kB' 'SUnreclaim: 77308 kB' 'KernelStack: 6496 kB' 'PageTables: 4168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 363028 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.524 05:52:20 
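For reference, the illustrative helper sketched earlier can pull all of the hugepage counters this verify step reads in turn; the expected values are the ones visible in the snapshots printed by the trace:

    for key in HugePages_Total HugePages_Free HugePages_Rsvd HugePages_Surp AnonHugePages; do
        printf '%s=%s\n' "$key" "$(get_meminfo_sketch "$key")"
    done
    # With the snapshots above this prints, one per line:
    # HugePages_Total=512, HugePages_Free=512, HugePages_Rsvd=0, HugePages_Surp=0, AnonHugePages=0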
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.524 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.525 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:28.526 nr_hugepages=512 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:28.526 resv_hugepages=0 
00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:28.526 surplus_hugepages=0 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:28.526 anon_hugepages=0 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8016472 kB' 'MemAvailable: 10533404 kB' 'Buffers: 2436 kB' 'Cached: 2721884 kB' 'SwapCached: 0 kB' 'Active: 452904 kB' 'Inactive: 2392952 kB' 'Active(anon): 132004 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2392952 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 123128 kB' 'Mapped: 48872 kB' 'Shmem: 10468 kB' 'KReclaimable: 80036 kB' 'Slab: 157340 kB' 'SReclaimable: 80036 kB' 'SUnreclaim: 77304 kB' 'KernelStack: 6464 kB' 'PageTables: 4064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 363028 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.526 05:52:20 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.526 05:52:20 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.526 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.527 
05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.527 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 
-- # mem_f=/proc/meminfo 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8016472 kB' 'MemUsed: 4225500 kB' 'SwapCached: 0 kB' 'Active: 452912 kB' 'Inactive: 2392952 kB' 'Active(anon): 132012 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2392952 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'FilePages: 2724320 kB' 'Mapped: 48872 kB' 'AnonPages: 123136 kB' 'Shmem: 10468 kB' 'KernelStack: 6532 kB' 'PageTables: 4064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 80036 kB' 'Slab: 157340 kB' 'SReclaimable: 80036 kB' 'SUnreclaim: 77304 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.528 05:52:20 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.528 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.529 05:52:20 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:28.529 node0=512 expecting 512 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:28.529 00:05:28.529 real 0m0.555s 00:05:28.529 user 0m0.283s 00:05:28.529 sys 0m0.310s 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.529 05:52:20 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:28.529 ************************************ 00:05:28.529 END TEST custom_alloc 
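custom_alloc finishes with the bookkeeping traced above: resv=0 and surp=0 from the meminfo scans, HugePages_Total of 512 globally, 512 pages on node0 ("node0=512 expecting 512"), and the whole check done in roughly half a second of wall time. As a rough illustration with assumed names, not the harness's own code (which keeps the counters in the nodes_test/nodes_sys arrays), the accounting boils down to:

check_hugepage_accounting() {
    local expected=$1                        # pages the test configured, 512 here
    local total rsvd surp
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    rsvd=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
    surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
    # Global pool must account for the requested pages plus any surplus/reserved.
    (( total == expected + surp + rsvd )) || return 1
    # Per-node view, matching the "node0=512 expecting 512" line in the log.
    local node
    for node in /sys/devices/system/node/node[0-9]*; do
        awk -v n="${node##*node}" \
            '$3 == "HugePages_Total:" {printf "node%s=%s\n", n, $4}' "$node/meminfo"
    done
}

On this VM, check_hugepage_accounting 512 would print node0=512 and return 0; any surplus or reserved pages would instead surface as a mismatch.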
00:05:28.529 ************************************ 00:05:28.529 05:52:20 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:28.529 05:52:20 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:28.529 05:52:20 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:28.529 05:52:20 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.529 05:52:20 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:28.529 ************************************ 00:05:28.529 START TEST no_shrink_alloc 00:05:28.529 ************************************ 00:05:28.529 05:52:20 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:05:28.529 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:28.529 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:28.529 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:28.529 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:05:28.529 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:28.529 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:28.529 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:28.529 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:28.529 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:28.529 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:28.530 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:28.530 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:28.530 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:28.530 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:28.530 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:28.530 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:28.530 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:28.530 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:28.530 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:28.530 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:05:28.530 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:28.530 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:29.100 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:29.100 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:29.100 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:29.101 
05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6970888 kB' 'MemAvailable: 9487824 kB' 'Buffers: 2436 kB' 'Cached: 2721888 kB' 'SwapCached: 0 kB' 'Active: 453436 kB' 'Inactive: 2392956 kB' 'Active(anon): 132536 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2392956 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 123660 kB' 'Mapped: 49004 kB' 'Shmem: 10468 kB' 'KReclaimable: 80036 kB' 'Slab: 157344 kB' 'SReclaimable: 80036 kB' 'SUnreclaim: 77308 kB' 'KernelStack: 6484 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 363028 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
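The no_shrink_alloc prologue traced above turns the requested size of 2097152 (kB, judging by the Hugepagesize: 2048 kB and Hugetlb: 2097152 kB pair in the meminfo dump) into nr_hugepages=1024 and assigns all of them to node 0; verify_nr_hugepages then starts by checking that transparent hugepages are not pinned to [never] — the "always [madvise] never" string being matched is what /sys/kernel/mm/transparent_hugepage/enabled typically reads — before it looks up AnonHugePages. A stand-alone sketch of the size-to-pages step, with an assumed helper name and without the harness's global arrays, could look like:

request_hugepages_sketch() {
    local size_kb=$1; shift                  # e.g. 2097152 kB in this run
    local -a node_ids=("$@")                 # e.g. 0
    local hp_kb nr n
    hp_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 kB here
    nr=$(( size_kb / hp_kb ))                # 2097152 / 2048 = 1024 pages
    for n in "${node_ids[@]}"; do
        # Only reports the plan; the real script writes the count via sysfs.
        echo "node$n -> $nr pages of ${hp_kb}kB" \
             "(/sys/devices/system/node/node$n/hugepages/hugepages-${hp_kb}kB/nr_hugepages)"
    done
}

request_hugepages_sketch 2097152 0 prints "node0 -> 1024 pages of 2048kB", matching the nr_hugepages=1024 value the trace assigns into nodes_test for node 0.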
00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.101 
05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.101 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.102 
05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:29.102 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6970888 kB' 'MemAvailable: 9487824 kB' 'Buffers: 2436 kB' 'Cached: 2721888 kB' 'SwapCached: 0 kB' 'Active: 452844 kB' 'Inactive: 2392956 kB' 'Active(anon): 131944 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2392956 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 123352 kB' 'Mapped: 49004 kB' 'Shmem: 10468 kB' 'KReclaimable: 80036 kB' 'Slab: 157344 kB' 'SReclaimable: 80036 kB' 'SUnreclaim: 77308 kB' 'KernelStack: 6452 kB' 'PageTables: 4092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 363028 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.103 05:52:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.103 05:52:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.103 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.104 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6970888 kB' 'MemAvailable: 9487824 kB' 'Buffers: 2436 kB' 'Cached: 2721888 kB' 'SwapCached: 0 kB' 'Active: 452656 kB' 'Inactive: 2392956 kB' 'Active(anon): 131756 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2392956 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 123128 kB' 'Mapped: 48872 kB' 'Shmem: 10468 kB' 'KReclaimable: 80036 kB' 'Slab: 157344 kB' 'SReclaimable: 80036 kB' 'SUnreclaim: 77308 kB' 'KernelStack: 6496 kB' 'PageTables: 4164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 363028 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.105 05:52:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.105 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.106 05:52:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.106 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.107 05:52:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:29.107 nr_hugepages=1024 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:29.107 resv_hugepages=0 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:29.107 surplus_hugepages=0 00:05:29.107 anon_hugepages=0 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
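[editor's sketch] The trace above repeats one pattern many times: get_meminfo from setup/common.sh scans /proc/meminfo (or a per-NUMA-node meminfo file) line by line with IFS=': ' and prints the value of a single requested field (AnonHugePages, HugePages_Surp, HugePages_Rsvd, and next HugePages_Total). A minimal bash sketch of that lookup pattern follows; the function name and the for-loop form are illustrative, not the exact upstream implementation.

# Sketch of the meminfo lookup traced above; assumes bash with mapfile/extglob.
get_meminfo_value() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local -a mem
    local line var val _
    # Prefer the per-node file when a node number is supplied and it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    shopt -s extglob
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <n> "; strip that prefix so
    # field names compare the same way for both files (as in common.sh@29).
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        # Split e.g. "HugePages_Total:    1024" into var=HugePages_Total val=1024.
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # not the requested field, keep scanning
        echo "$val"                        # numeric value only, kB unit dropped
        return 0
    done
    return 1
}

# Against the snapshots printed in this trace, get_meminfo_value HugePages_Surp
# would print 0 and get_meminfo_value MemTotal would print 12241972.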
00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6970888 kB' 'MemAvailable: 9487824 kB' 'Buffers: 2436 kB' 'Cached: 2721888 kB' 'SwapCached: 0 kB' 'Active: 452640 kB' 'Inactive: 2392956 kB' 'Active(anon): 131740 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2392956 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 123112 kB' 'Mapped: 48872 kB' 'Shmem: 10468 kB' 'KReclaimable: 80036 kB' 'Slab: 157344 kB' 'SReclaimable: 80036 kB' 'SUnreclaim: 77308 kB' 'KernelStack: 6480 kB' 'PageTables: 4112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 363028 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
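[editor's sketch] Having echoed nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0, the no_shrink_alloc test then checks the kernel's accounting at setup/hugepages.sh@107 and @109: HugePages_Total read back from meminfo must equal the requested pool size plus surplus and reserved pages. A short hedged sketch of that check, reusing the get_meminfo_value helper sketched above; variable names mirror the xtrace output and the values are the ones shown in this section.

# Consistency check mirroring hugepages.sh@107/@109 in the trace above.
nr_hugepages=1024
surp=0 resv=0
total=$(get_meminfo_value HugePages_Total)   # 1024 in the snapshot above

if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage pool is consistent: total=$total"
fi
(( total == nr_hugepages )) && echo "no surplus or reserved pages in use"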
00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.107 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.108 05:52:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.108 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.109 05:52:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6970888 kB' 'MemUsed: 5271084 kB' 'SwapCached: 0 kB' 'Active: 452708 kB' 'Inactive: 2392956 kB' 'Active(anon): 131808 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2392956 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 
kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'FilePages: 2724324 kB' 'Mapped: 48872 kB' 'AnonPages: 123216 kB' 'Shmem: 10468 kB' 'KernelStack: 6512 kB' 'PageTables: 4216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 80036 kB' 'Slab: 157348 kB' 'SReclaimable: 80036 kB' 'SUnreclaim: 77312 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.109 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.110 05:52:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.110 
05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.110 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.111 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.111 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.111 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.111 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.111 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.111 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.111 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.111 05:52:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.111 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.111 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.111 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.111 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.111 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.111 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.111 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.111 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.111 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.111 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.111 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.111 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.111 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.111 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.111 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.111 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:29.111 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:29.111 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:29.111 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:29.111 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:29.111 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:29.111 node0=1024 expecting 1024 00:05:29.111 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:29.111 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:29.111 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:29.111 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:29.111 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:05:29.111 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:29.111 05:52:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:29.369 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:29.632 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:29.632 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:29.632 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:29.632 05:52:21 
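At this point the per-node verification has passed: node0 reports 1024 huge pages, matching the 1024 expected, and setup.sh is then re-invoked with NRHUGE=512 while CLEAR_HUGE=no, so the existing pool is left untouched and the script only reports 'Requested 512 hugepages but 1024 already allocated on node0', which appears to be the no-shrink behaviour this case exercises. A rough sketch of the per-node accounting, reusing the hypothetical get_meminfo_value helper from the earlier sketch (variable names are illustrative, not taken from setup/hugepages.sh):
# Sketch only: check that each NUMA node ended up with the expected number of huge pages.
shopt -s extglob
expected=1024
for node_dir in /sys/devices/system/node/node+([0-9]); do
    node=${node_dir##*node}
    total=$(get_meminfo_value HugePages_Total "$node")   # read from the node's own meminfo
    echo "node${node}=${total} expecting ${expected}"
    [[ $total -eq $expected ]] || echo "huge page count mismatch on node${node}" >&2
done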
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:05:29.632 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:29.632 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:29.632 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:29.632 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:29.632 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:29.632 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:29.632 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:29.632 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:29.632 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:29.632 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:29.632 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:29.632 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:29.632 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:29.632 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:29.632 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:29.632 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:29.632 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:29.632 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.632 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6966484 kB' 'MemAvailable: 9483420 kB' 'Buffers: 2436 kB' 'Cached: 2721888 kB' 'SwapCached: 0 kB' 'Active: 453264 kB' 'Inactive: 2392956 kB' 'Active(anon): 132364 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2392956 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 123516 kB' 'Mapped: 48964 kB' 'Shmem: 10468 kB' 'KReclaimable: 80036 kB' 'Slab: 157316 kB' 'SReclaimable: 80036 kB' 'SUnreclaim: 77280 kB' 'KernelStack: 6500 kB' 'PageTables: 4272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 363028 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
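The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test above reads as a transparent-hugepage check: the string is the kernel's THP 'enabled' setting, and only when it is not pinned to [never] does verify_nr_hugepages go on to fetch AnonHugePages from /proc/meminfo (the skip-loop that continues below). A hedged sketch of that step, again using the hypothetical get_meminfo_value helper from the first sketch:
# Sketch only: read the THP mode and, unless it is pinned to [never],
# account for anonymous huge pages alongside the hugetlb pool.
thp_mode=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
anon=0
if [[ $thp_mode != *"[never]"* ]]; then
    anon=$(get_meminfo_value AnonHugePages)   # kB of THP-backed anonymous memory (0 in this run)
fi
echo "anon_hugepages=${anon}"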
00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.633 05:52:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:29.633 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # 
[[ -e /sys/devices/system/node/node/meminfo ]] 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6966484 kB' 'MemAvailable: 9483420 kB' 'Buffers: 2436 kB' 'Cached: 2721888 kB' 'SwapCached: 0 kB' 'Active: 453260 kB' 'Inactive: 2392956 kB' 'Active(anon): 132360 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2392956 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 123476 kB' 'Mapped: 48872 kB' 'Shmem: 10468 kB' 'KReclaimable: 80036 kB' 'Slab: 157328 kB' 'SReclaimable: 80036 kB' 'SUnreclaim: 77292 kB' 'KernelStack: 6528 kB' 'PageTables: 4268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 363028 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.634 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
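The repeated xtrace blocks running through this part of the log — IFS=': ', read -r var val _, a [[ key == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] test, continue — are one iteration per /proc/meminfo key inside the get_meminfo helper traced at setup/common.sh@31-@33. A minimal sketch of that scan, reconstructed from the trace; the function name and exact structure are assumptions, not the verbatim SPDK helper:

# Reconstructed sketch of the per-key scan traced at setup/common.sh@31-@33.
# get_meminfo_value is an illustrative name drawn from the xtrace output.
get_meminfo_value() {
    local get=$1          # key to look up, e.g. HugePages_Surp
    local line var val _
    local -a mem
    mapfile -t mem < /proc/meminfo          # snapshot once, scan in memory
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        # every non-matching key produces one of the "continue" entries in the trace
        [[ $var == "$get" ]] || continue
        echo "$val"                          # 0 for HugePages_Surp on this host
        return 0
    done
    echo 0                                   # key absent: report 0
}

get_meminfo_value HugePages_Surp

On the host traced here the scan reaches the HugePages_Surp line, echoes 0 at common.sh@33 and returns, which is why hugepages.sh records surp=0 below.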
00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.635 
05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.635 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.636 05:52:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6966484 kB' 'MemAvailable: 9483420 kB' 'Buffers: 2436 kB' 'Cached: 2721888 kB' 'SwapCached: 0 kB' 'Active: 452996 kB' 'Inactive: 2392956 kB' 'Active(anon): 132096 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2392956 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 123240 kB' 'Mapped: 48872 kB' 'Shmem: 10468 kB' 'KReclaimable: 80036 kB' 'Slab: 157328 kB' 'SReclaimable: 80036 kB' 'SUnreclaim: 77292 kB' 'KernelStack: 6512 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 363028 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.636 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.637 05:52:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
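Each get_meminfo call in this trace starts with the same prologue: mem_f=/proc/meminfo, an existence test on /sys/devices/system/node/node$node/meminfo (the $node local is empty here, so the per-node path is skipped), [[ -n '' ]], mapfile -t mem, and the "Node N " prefix strip mem=("${mem[@]#Node +([0-9]) }"). A hedged sketch of that source-selection step, with an illustrative helper name and layout:

shopt -s extglob    # required for the +([0-9]) pattern used in the prefix strip

meminfo_snapshot() {
    local node=${1:-}            # empty in this trace => system-wide meminfo
    local mem_f=/proc/meminfo
    local -a mem

    # With a real node number the per-node file exists and replaces /proc/meminfo;
    # with $node empty the test is against .../node/meminfo and simply fails.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node lines are prefixed "Node N "; strip it so both sources parse the same.
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}"
}

meminfo_snapshot | grep -E '^HugePages_'

With no node argument the snapshot always comes from /proc/meminfo, which is why every printf dump in this section shows the same system-wide values (HugePages_Total: 1024, HugePages_Surp: 0, HugePages_Rsvd: 0).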
00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.637 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:29.638 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:29.639 nr_hugepages=1024 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:29.639 resv_hugepages=0 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:29.639 surplus_hugepages=0 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:29.639 anon_hugepages=0 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@28 -- # mapfile -t mem 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6967236 kB' 'MemAvailable: 9484172 kB' 'Buffers: 2436 kB' 'Cached: 2721888 kB' 'SwapCached: 0 kB' 'Active: 448364 kB' 'Inactive: 2392956 kB' 'Active(anon): 127464 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2392956 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 118536 kB' 'Mapped: 48132 kB' 'Shmem: 10468 kB' 'KReclaimable: 80036 kB' 'Slab: 157292 kB' 'SReclaimable: 80036 kB' 'SUnreclaim: 77256 kB' 'KernelStack: 6400 kB' 'PageTables: 3764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345332 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.639 05:52:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
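The hugepages.sh entries interleaved a little earlier in this block — resv=0, the echoed nr_hugepages=1024 / resv_hugepages=0 / surplus_hugepages=0 / anon_hugepages=0 lines, and the two (( ... )) tests at hugepages.sh@107 and @109 — are the no_shrink_alloc accounting check, after which HugePages_Total is read back via get_meminfo. A standalone restatement of the same check; verify_nr_hugepages and the awk extraction are illustrative, only the echoed names and the two arithmetic tests come from the trace:

#!/usr/bin/env bash
set -euo pipefail

verify_nr_hugepages() {
    local expected=$1                      # 1024 in this run
    local total surp resv anon

    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)

    echo "nr_hugepages=$total"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"

    # the two invariants traced at hugepages.sh@107 and @109: the requested pool
    # must be covered by total + surplus + reserved pages, and must match the total
    (( expected == total + surp + resv ))
    (( expected == total ))
}

verify_nr_hugepages 1024

In this run both tests pass with surp=0 and resv=0, so the script proceeds to the HugePages_Total scan that continues below.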
00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.639 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.640 05:52:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.640 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
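Once HugePages_Total finally matches (the echo 1024 just below), hugepages.sh@110 checks that the 1024 system-wide pages equal nr_hugepages plus surplus plus reserved, and get_nodes then repeats the same lookup per NUMA node, ending in the "node0=1024 expecting 1024" line near the close of the test. A hedged sketch of that per-node accounting, using a hypothetical helper name (the real logic is spread across get_nodes and the verify loop in setup/hugepages.sh):

check_node_hugepages() {
    local node total expected=$1
    for node in /sys/devices/system/node/node[0-9]*; do
        # Per-node counts are also exposed under each node's hugepages sysfs directory.
        total=$(<"$node/hugepages/hugepages-2048kB/nr_hugepages")
        echo "${node##*/}=$total expecting $expected"
        (( total == expected )) || return 1
    done
}

With a single node and the default 2 MiB huge page size, check_node_hugepages 1024 reproduces the node0=1024 line seen below.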
00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6968236 kB' 'MemUsed: 5273736 kB' 'SwapCached: 0 kB' 'Active: 
448248 kB' 'Inactive: 2392956 kB' 'Active(anon): 127348 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2392956 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'FilePages: 2724324 kB' 'Mapped: 48392 kB' 'AnonPages: 118456 kB' 'Shmem: 10468 kB' 'KernelStack: 6416 kB' 'PageTables: 3788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 80032 kB' 'Slab: 157284 kB' 'SReclaimable: 80032 kB' 'SUnreclaim: 77252 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.641 
05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.641 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.642 05:52:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:29.642 node0=1024 expecting 1024 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:29.642 00:05:29.642 real 0m1.044s 00:05:29.642 user 0m0.530s 00:05:29.642 sys 0m0.584s 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.642 05:52:21 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:29.643 ************************************ 00:05:29.643 END TEST no_shrink_alloc 00:05:29.643 ************************************ 00:05:29.643 05:52:21 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:29.643 05:52:21 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:05:29.643 05:52:21 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:29.643 05:52:21 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:29.643 
05:52:21 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:29.643 05:52:21 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:29.643 05:52:21 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:29.643 05:52:21 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:29.643 05:52:21 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:29.643 05:52:21 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:29.643 00:05:29.643 real 0m4.710s 00:05:29.643 user 0m2.312s 00:05:29.643 sys 0m2.533s 00:05:29.643 05:52:21 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.643 ************************************ 00:05:29.643 05:52:21 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:29.643 END TEST hugepages 00:05:29.643 ************************************ 00:05:29.901 05:52:21 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:29.901 05:52:21 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:29.901 05:52:21 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:29.901 05:52:21 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.901 05:52:21 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:29.901 ************************************ 00:05:29.901 START TEST driver 00:05:29.901 ************************************ 00:05:29.901 05:52:21 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:29.901 * Looking for test storage... 00:05:29.901 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:29.901 05:52:21 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:29.901 05:52:21 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:29.901 05:52:21 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:30.467 05:52:22 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:30.467 05:52:22 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:30.467 05:52:22 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.467 05:52:22 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:30.467 ************************************ 00:05:30.467 START TEST guess_driver 00:05:30.467 ************************************ 00:05:30.467 05:52:22 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:05:30.467 05:52:22 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:30.467 05:52:22 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:30.467 05:52:22 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:30.467 05:52:22 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:30.467 05:52:22 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:30.468 05:52:22 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:30.468 05:52:22 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:30.468 05:52:22 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 
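The guess_driver test reduces to one decision, visible in the surrounding xtrace: the iommu_groups array was just collected above, the (( 0 > 0 )) check that follows finds it empty, no unsafe-noiommu override is set, so vfio is rejected and the test settles on uio_pci_generic after modprobe --show-depends confirms the module resolves to real .ko files. A compressed sketch of that selection, where pick_driver_sketch is an illustrative stand-in rather than the functions in test/setup/driver.sh:

pick_driver_sketch() {
    shopt -s nullglob                    # so an empty iommu_groups dir yields a zero-length array
    local groups=(/sys/kernel/iommu_groups/*)
    local unsafe
    unsafe=$(cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode 2>/dev/null)
    if (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; then
        echo vfio-pci                    # assumed label for the vfio branch (not taken here)
    elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
        echo uio_pci_generic             # the branch this VM takes
    else
        echo 'No valid driver found'
    fi
}

Inside a VM without an emulated IOMMU this prints uio_pci_generic, matching the "Looking for driver=uio_pci_generic" line further down.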
00:05:30.468 05:52:22 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:30.468 05:52:22 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:05:30.468 05:52:22 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:05:30.468 05:52:22 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:05:30.468 05:52:22 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:30.468 05:52:22 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:30.468 05:52:22 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:30.468 05:52:22 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:30.468 05:52:22 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:05:30.468 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:05:30.468 05:52:22 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:30.468 05:52:22 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:30.468 05:52:22 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:30.468 05:52:22 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:30.468 Looking for driver=uio_pci_generic 00:05:30.468 05:52:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:30.468 05:52:22 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:05:30.468 05:52:22 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:30.468 05:52:22 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:31.036 05:52:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:31.036 05:52:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:05:31.036 05:52:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:31.296 05:52:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:31.296 05:52:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:31.296 05:52:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:31.296 05:52:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:31.296 05:52:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:31.296 05:52:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:31.296 05:52:22 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:31.296 05:52:22 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:31.296 05:52:22 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:31.296 05:52:22 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:31.865 00:05:31.865 real 0m1.439s 00:05:31.865 user 0m0.582s 00:05:31.865 sys 0m0.868s 00:05:31.865 05:52:23 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:05:31.865 05:52:23 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:31.865 ************************************ 00:05:31.865 END TEST guess_driver 00:05:31.865 ************************************ 00:05:31.865 05:52:23 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:05:31.865 00:05:31.865 real 0m2.152s 00:05:31.865 user 0m0.822s 00:05:31.865 sys 0m1.394s 00:05:31.865 05:52:23 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.865 05:52:23 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:31.865 ************************************ 00:05:31.865 END TEST driver 00:05:31.865 ************************************ 00:05:31.865 05:52:23 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:31.865 05:52:23 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:31.865 05:52:23 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:31.865 05:52:23 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.865 05:52:23 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:31.865 ************************************ 00:05:31.865 START TEST devices 00:05:31.865 ************************************ 00:05:31.865 05:52:23 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:32.123 * Looking for test storage... 00:05:32.123 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:32.123 05:52:23 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:32.123 05:52:23 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:32.123 05:52:23 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:32.123 05:52:23 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:32.689 05:52:24 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:32.689 05:52:24 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:32.689 05:52:24 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:32.690 05:52:24 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:32.690 05:52:24 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:32.690 05:52:24 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:32.690 05:52:24 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:32.690 05:52:24 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:32.690 05:52:24 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:32.690 05:52:24 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:32.690 05:52:24 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:05:32.690 05:52:24 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:05:32.690 05:52:24 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:05:32.690 05:52:24 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:32.690 05:52:24 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:32.690 05:52:24 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 
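The devices test opens with get_zoned_devs, whose xtrace appears around here: each /sys/block/nvme* device is checked for a queue/zoned attribute, and since all four report "none", nothing is excluded from the mount tests that follow. A simplified sketch of that filter, under an illustrative name (the real helper in common/autotest_common.sh fills the zoned_devs associative array declared above instead of printing):

get_zoned_devs_sketch() {
    local dev zoned
    for dev in /sys/block/nvme*; do
        [[ -e $dev/queue/zoned ]] || continue
        zoned=$(<"$dev/queue/zoned")
        # Anything other than "none" (e.g. host-managed) counts as zoned and is skipped later.
        if [[ $zoned != none ]]; then
            echo "${dev##*/}"
        fi
    done
}

On this VM the function prints nothing, so nvme0n1 goes on to be GPT-checked, partitioned with sgdisk, formatted and mounted in the nvme_mount test below.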
00:05:32.690 05:52:24 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:05:32.690 05:52:24 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:05:32.690 05:52:24 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:32.690 05:52:24 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:32.690 05:52:24 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:05:32.690 05:52:24 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:05:32.690 05:52:24 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:32.690 05:52:24 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:32.690 05:52:24 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:32.690 05:52:24 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:32.690 05:52:24 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:32.690 05:52:24 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:32.690 05:52:24 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:32.690 05:52:24 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:32.690 05:52:24 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:32.690 05:52:24 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:32.690 05:52:24 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:05:32.690 05:52:24 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:32.690 05:52:24 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:32.690 05:52:24 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:32.690 05:52:24 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:32.948 No valid GPT data, bailing 00:05:32.948 05:52:24 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:32.948 05:52:24 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:32.948 05:52:24 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:32.948 05:52:24 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:32.948 05:52:24 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:32.948 05:52:24 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:32.948 05:52:24 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:32.948 05:52:24 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:32.948 05:52:24 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:32.948 05:52:24 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:05:32.949 05:52:24 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:32.949 05:52:24 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:05:32.949 05:52:24 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:32.949 05:52:24 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:05:32.949 05:52:24 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:32.949 05:52:24 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:05:32.949 
05:52:24 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:05:32.949 05:52:24 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:05:32.949 No valid GPT data, bailing 00:05:32.949 05:52:24 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:05:32.949 05:52:24 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:32.949 05:52:24 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:32.949 05:52:24 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:05:32.949 05:52:24 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:05:32.949 05:52:24 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:05:32.949 05:52:24 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:32.949 05:52:24 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:32.949 05:52:24 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:32.949 05:52:24 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:05:32.949 05:52:24 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:32.949 05:52:24 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:05:32.949 05:52:24 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:32.949 05:52:24 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:05:32.949 05:52:24 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:32.949 05:52:24 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:05:32.949 05:52:24 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:05:32.949 05:52:24 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:05:32.949 No valid GPT data, bailing 00:05:32.949 05:52:24 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:05:32.949 05:52:24 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:32.949 05:52:24 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:32.949 05:52:24 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:05:32.949 05:52:24 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:05:32.949 05:52:24 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:05:32.949 05:52:24 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:32.949 05:52:24 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:32.949 05:52:24 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:32.949 05:52:24 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:05:32.949 05:52:24 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:32.949 05:52:24 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:05:32.949 05:52:24 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:32.949 05:52:24 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:05:32.949 05:52:24 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:05:32.949 05:52:24 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:05:32.949 05:52:24 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:05:32.949 05:52:24 
setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:05:33.208 No valid GPT data, bailing 00:05:33.208 05:52:24 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:33.208 05:52:24 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:33.208 05:52:24 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:33.208 05:52:24 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:05:33.208 05:52:24 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:05:33.208 05:52:24 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:05:33.208 05:52:24 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:05:33.208 05:52:24 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:33.208 05:52:24 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:33.208 05:52:24 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:05:33.208 05:52:24 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:05:33.208 05:52:24 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:33.208 05:52:24 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:33.208 05:52:24 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:33.208 05:52:24 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.208 05:52:24 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:33.208 ************************************ 00:05:33.208 START TEST nvme_mount 00:05:33.208 ************************************ 00:05:33.208 05:52:24 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:05:33.208 05:52:24 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:33.208 05:52:24 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:33.208 05:52:24 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:33.208 05:52:24 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:33.208 05:52:24 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:33.208 05:52:24 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:33.208 05:52:24 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:33.208 05:52:24 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:33.208 05:52:24 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:33.208 05:52:24 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:33.208 05:52:24 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:33.208 05:52:24 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:33.208 05:52:24 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:33.208 05:52:24 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:33.208 05:52:24 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:33.208 05:52:24 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:33.208 05:52:24 setup.sh.devices.nvme_mount -- 
setup/common.sh@51 -- # (( size /= 4096 )) 00:05:33.208 05:52:24 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:33.208 05:52:24 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:34.147 Creating new GPT entries in memory. 00:05:34.147 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:34.147 other utilities. 00:05:34.147 05:52:25 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:34.147 05:52:25 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:34.147 05:52:25 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:34.147 05:52:25 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:34.147 05:52:25 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:35.086 Creating new GPT entries in memory. 00:05:35.086 The operation has completed successfully. 00:05:35.086 05:52:26 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:35.086 05:52:26 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:35.086 05:52:26 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 69112 00:05:35.345 05:52:26 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:35.345 05:52:26 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:35.345 05:52:26 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:35.345 05:52:26 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:35.345 05:52:26 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:35.345 05:52:26 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:35.345 05:52:26 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:35.345 05:52:26 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:35.345 05:52:26 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:35.345 05:52:26 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:35.345 05:52:26 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:35.345 05:52:26 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:35.345 05:52:26 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:35.345 05:52:26 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:35.345 05:52:26 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:35.345 05:52:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.345 05:52:26 setup.sh.devices.nvme_mount -- 
setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:35.345 05:52:26 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:35.345 05:52:26 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:35.345 05:52:26 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:35.345 05:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:35.345 05:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:35.345 05:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:35.345 05:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.345 05:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:35.345 05:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.604 05:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:35.604 05:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.604 05:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:35.604 05:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.604 05:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:35.604 05:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:35.604 05:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:35.604 05:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:35.604 05:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:35.864 05:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:35.864 05:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:35.864 05:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:35.864 05:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:35.864 05:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:35.864 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:35.864 05:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:35.864 05:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:36.123 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:36.123 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:36.123 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:36.123 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:36.123 05:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- 
# mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:36.123 05:52:27 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:36.123 05:52:27 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:36.123 05:52:27 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:36.123 05:52:27 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:36.123 05:52:27 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:36.123 05:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:36.123 05:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:36.123 05:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:36.123 05:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:36.123 05:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:36.123 05:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:36.123 05:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:36.123 05:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:36.123 05:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:36.123 05:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.123 05:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:36.123 05:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:36.123 05:52:27 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:36.123 05:52:27 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:36.382 05:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:36.382 05:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:36.382 05:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:36.382 05:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.382 05:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:36.382 05:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.382 05:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:36.382 05:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.382 05:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:36.382 05:52:28 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.641 05:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:36.641 05:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:36.641 05:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:36.641 05:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:36.642 05:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:36.642 05:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:36.642 05:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:05:36.642 05:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:36.642 05:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:36.642 05:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:36.642 05:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:36.642 05:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:36.642 05:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:36.642 05:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:36.642 05:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.642 05:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:36.642 05:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:36.642 05:52:28 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:36.642 05:52:28 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:36.901 05:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:36.901 05:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:36.901 05:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:36.901 05:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.901 05:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:36.901 05:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.901 05:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:36.901 05:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.160 05:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:37.160 05:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.160 05:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:37.160 05:52:28 setup.sh.devices.nvme_mount -- 
setup/devices.sh@68 -- # [[ -n '' ]] 00:05:37.160 05:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:37.160 05:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:37.160 05:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:37.160 05:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:37.160 05:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:37.160 05:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:37.160 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:37.160 00:05:37.160 real 0m4.005s 00:05:37.160 user 0m0.710s 00:05:37.160 sys 0m1.041s 00:05:37.160 05:52:28 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.160 05:52:28 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:37.160 ************************************ 00:05:37.160 END TEST nvme_mount 00:05:37.160 ************************************ 00:05:37.160 05:52:28 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:37.160 05:52:28 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:37.160 05:52:28 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:37.160 05:52:28 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.160 05:52:28 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:37.160 ************************************ 00:05:37.160 START TEST dm_mount 00:05:37.160 ************************************ 00:05:37.160 05:52:28 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:05:37.160 05:52:28 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:37.160 05:52:28 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:37.160 05:52:28 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:37.160 05:52:28 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:37.161 05:52:28 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:37.161 05:52:28 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:37.161 05:52:28 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:37.161 05:52:28 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:37.161 05:52:28 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:37.161 05:52:28 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:37.161 05:52:28 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:37.161 05:52:28 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:37.161 05:52:28 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:37.161 05:52:28 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:37.161 05:52:28 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:37.161 05:52:28 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:37.161 05:52:28 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:37.161 05:52:28 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 
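The partition loop above only assembles the partition names (nvme0n1p1, nvme0n1p2); the sgdisk calls that follow do the actual carving. A minimal sketch of the same start/end arithmetic, assuming 512-byte sectors (so each partition comes out to 128 MiB); the sector numbers reproduce the --new arguments seen in the trace:

#!/usr/bin/env bash
# Sketch of the partition math the dm_mount test drives, not the setup/common.sh script itself.
size=1073741824              # 1 GiB budget, as set above
(( size /= 4096 ))           # 262144 sectors per partition, matching the --new ranges in the trace
part_start=0 part_end=0
for part in 1 2; do
  (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
  (( part_end = part_start + size - 1 ))
  echo "nvme0n1p$part: sectors $part_start..$part_end"    # 2048..264191, then 264192..526335
done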
00:05:37.161 05:52:28 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:37.161 05:52:28 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:37.161 05:52:28 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:38.126 Creating new GPT entries in memory. 00:05:38.126 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:38.126 other utilities. 00:05:38.126 05:52:29 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:38.126 05:52:29 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:38.126 05:52:29 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:38.126 05:52:29 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:38.126 05:52:29 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:39.497 Creating new GPT entries in memory. 00:05:39.497 The operation has completed successfully. 00:05:39.497 05:52:30 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:39.497 05:52:30 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:39.497 05:52:30 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:39.497 05:52:30 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:39.497 05:52:30 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:40.431 The operation has completed successfully. 
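sync_dev_uevents.sh is the repo's helper for holding the test until the kernel has announced the freshly created partition devices; its exact mechanism is not visible in this trace. A rough, hedged stand-in that simply settles udev and polls for the block nodes (device names taken from the trace above):

#!/usr/bin/env bash
# Not the SPDK helper; a coarse approximation that waits for the new partition nodes to appear.
udevadm settle --timeout=30
for dev in /dev/nvme0n1p1 /dev/nvme0n1p2; do
  for _ in $(seq 1 50); do
    [[ -b $dev ]] && break
    sleep 0.1
  done
  [[ -b $dev ]] || { echo "timed out waiting for $dev" >&2; exit 1; }
done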
00:05:40.431 05:52:31 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:40.431 05:52:31 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:40.431 05:52:31 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 69545 00:05:40.431 05:52:31 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:40.431 05:52:31 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:40.431 05:52:31 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:40.431 05:52:31 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:40.431 05:52:31 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:40.431 05:52:31 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:40.431 05:52:31 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:40.431 05:52:31 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:40.431 05:52:31 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:40.431 05:52:31 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:40.431 05:52:31 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:40.431 05:52:31 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:40.431 05:52:31 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:40.431 05:52:31 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:40.431 05:52:31 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:40.431 05:52:31 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:40.431 05:52:31 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:40.431 05:52:31 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:40.431 05:52:31 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:40.431 05:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:40.431 05:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:40.431 05:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:40.431 05:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:40.431 05:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:40.431 05:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:40.431 05:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:40.431 05:52:32 setup.sh.devices.dm_mount -- 
setup/devices.sh@56 -- # : 00:05:40.431 05:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:40.431 05:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.431 05:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:40.431 05:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:40.431 05:52:32 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:40.431 05:52:32 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:40.689 05:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:40.689 05:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:40.689 05:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:40.689 05:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.689 05:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:40.689 05:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.689 05:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:40.689 05:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.947 05:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:40.947 05:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.947 05:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:40.947 05:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:40.947 05:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:40.948 05:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:40.948 05:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:40.948 05:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:40.948 05:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:40.948 05:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:40.948 05:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:40.948 05:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:40.948 05:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:40.948 05:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:40.948 05:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:40.948 05:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:40.948 05:52:32 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.948 05:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:40.948 05:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:40.948 05:52:32 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:40.948 05:52:32 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:41.219 05:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:41.219 05:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:41.219 05:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:41.219 05:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.219 05:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:41.219 05:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.219 05:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:41.219 05:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.479 05:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:41.479 05:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.479 05:52:33 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:41.479 05:52:33 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:41.479 05:52:33 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:41.479 05:52:33 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:41.479 05:52:33 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:41.479 05:52:33 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:41.479 05:52:33 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:41.479 05:52:33 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:41.479 05:52:33 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:41.479 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:41.479 05:52:33 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:41.479 05:52:33 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:41.479 00:05:41.479 real 0m4.258s 00:05:41.479 user 0m0.467s 00:05:41.479 sys 0m0.733s 00:05:41.479 05:52:33 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.479 ************************************ 00:05:41.479 END TEST dm_mount 00:05:41.479 ************************************ 00:05:41.479 05:52:33 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:41.479 05:52:33 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:41.479 05:52:33 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:41.479 05:52:33 
setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:41.479 05:52:33 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:41.479 05:52:33 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:41.480 05:52:33 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:41.480 05:52:33 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:41.480 05:52:33 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:41.740 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:41.740 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:41.740 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:41.740 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:41.740 05:52:33 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:41.740 05:52:33 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:41.740 05:52:33 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:41.740 05:52:33 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:41.740 05:52:33 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:41.740 05:52:33 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:41.740 05:52:33 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:41.740 00:05:41.740 real 0m9.842s 00:05:41.740 user 0m1.862s 00:05:41.740 sys 0m2.374s 00:05:41.740 05:52:33 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.740 05:52:33 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:41.740 ************************************ 00:05:41.740 END TEST devices 00:05:41.740 ************************************ 00:05:41.740 05:52:33 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:41.740 ************************************ 00:05:41.740 END TEST setup.sh 00:05:41.740 ************************************ 00:05:41.740 00:05:41.740 real 0m21.767s 00:05:41.740 user 0m7.265s 00:05:41.740 sys 0m9.045s 00:05:41.740 05:52:33 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.740 05:52:33 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:41.999 05:52:33 -- common/autotest_common.sh@1142 -- # return 0 00:05:41.999 05:52:33 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:42.566 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:42.566 Hugepages 00:05:42.566 node hugesize free / total 00:05:42.566 node0 1048576kB 0 / 0 00:05:42.566 node0 2048kB 2048 / 2048 00:05:42.566 00:05:42.566 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:42.566 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:42.824 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:42.824 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:05:42.824 05:52:34 -- spdk/autotest.sh@130 -- # uname -s 00:05:42.824 05:52:34 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:42.824 05:52:34 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:42.824 05:52:34 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:43.390 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:43.648 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:43.648 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:43.648 05:52:35 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:44.585 05:52:36 -- common/autotest_common.sh@1533 -- # bdfs=() 00:05:44.585 05:52:36 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:44.585 05:52:36 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:44.585 05:52:36 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:44.585 05:52:36 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:44.585 05:52:36 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:44.585 05:52:36 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:44.585 05:52:36 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:44.585 05:52:36 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:44.844 05:52:36 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:05:44.844 05:52:36 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:44.844 05:52:36 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:45.102 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:45.102 Waiting for block devices as requested 00:05:45.102 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:45.362 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:45.362 05:52:36 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:45.362 05:52:36 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:45.362 05:52:36 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:45.362 05:52:36 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:05:45.362 05:52:36 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:45.362 05:52:36 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:45.362 05:52:36 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:45.362 05:52:36 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:05:45.362 05:52:36 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:05:45.362 05:52:36 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:05:45.362 05:52:36 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:05:45.362 05:52:36 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:45.362 05:52:36 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:45.362 05:52:36 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:05:45.362 05:52:36 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:45.362 05:52:36 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:45.362 05:52:36 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:05:45.362 05:52:36 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:45.362 05:52:36 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:45.362 05:52:36 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:45.362 05:52:36 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:45.362 05:52:36 -- common/autotest_common.sh@1557 -- # continue 00:05:45.362 
05:52:36 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:45.362 05:52:36 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:45.362 05:52:36 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:45.362 05:52:36 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:05:45.362 05:52:36 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:45.362 05:52:36 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:45.362 05:52:36 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:45.362 05:52:36 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:45.362 05:52:36 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:45.362 05:52:36 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:45.362 05:52:36 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:45.362 05:52:36 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:45.362 05:52:36 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:45.362 05:52:36 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:05:45.362 05:52:36 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:45.362 05:52:36 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:45.362 05:52:36 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:45.362 05:52:36 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:45.362 05:52:36 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:45.362 05:52:36 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:45.362 05:52:36 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:45.362 05:52:36 -- common/autotest_common.sh@1557 -- # continue 00:05:45.362 05:52:36 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:45.362 05:52:36 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:45.362 05:52:36 -- common/autotest_common.sh@10 -- # set +x 00:05:45.362 05:52:37 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:45.362 05:52:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:45.362 05:52:37 -- common/autotest_common.sh@10 -- # set +x 00:05:45.362 05:52:37 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:45.929 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:46.188 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:46.188 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:46.188 05:52:37 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:46.188 05:52:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:46.188 05:52:37 -- common/autotest_common.sh@10 -- # set +x 00:05:46.188 05:52:37 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:46.188 05:52:37 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:46.188 05:52:37 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:46.188 05:52:37 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:46.188 05:52:37 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:46.188 05:52:37 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:46.188 05:52:37 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:46.188 05:52:37 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:46.188 05:52:37 -- common/autotest_common.sh@1514 -- # 
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:46.188 05:52:37 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:46.188 05:52:37 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:46.447 05:52:37 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:05:46.447 05:52:37 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:46.447 05:52:37 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:46.447 05:52:37 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:46.447 05:52:37 -- common/autotest_common.sh@1580 -- # device=0x0010 00:05:46.447 05:52:37 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:46.447 05:52:37 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:46.447 05:52:37 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:46.447 05:52:37 -- common/autotest_common.sh@1580 -- # device=0x0010 00:05:46.447 05:52:37 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:46.447 05:52:37 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:05:46.447 05:52:37 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:05:46.447 05:52:37 -- common/autotest_common.sh@1593 -- # return 0 00:05:46.447 05:52:37 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:46.447 05:52:37 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:46.447 05:52:37 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:46.447 05:52:37 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:46.447 05:52:37 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:46.447 05:52:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:46.447 05:52:37 -- common/autotest_common.sh@10 -- # set +x 00:05:46.447 05:52:37 -- spdk/autotest.sh@164 -- # [[ 1 -eq 1 ]] 00:05:46.447 05:52:37 -- spdk/autotest.sh@165 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:05:46.447 05:52:37 -- spdk/autotest.sh@165 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:05:46.447 05:52:37 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:46.447 05:52:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:46.447 05:52:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.447 05:52:37 -- common/autotest_common.sh@10 -- # set +x 00:05:46.447 ************************************ 00:05:46.447 START TEST env 00:05:46.447 ************************************ 00:05:46.447 05:52:37 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:46.447 * Looking for test storage... 
00:05:46.447 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:46.447 05:52:38 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:46.447 05:52:38 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:46.447 05:52:38 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.447 05:52:38 env -- common/autotest_common.sh@10 -- # set +x 00:05:46.447 ************************************ 00:05:46.447 START TEST env_memory 00:05:46.447 ************************************ 00:05:46.447 05:52:38 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:46.447 00:05:46.447 00:05:46.447 CUnit - A unit testing framework for C - Version 2.1-3 00:05:46.447 http://cunit.sourceforge.net/ 00:05:46.447 00:05:46.447 00:05:46.447 Suite: memory 00:05:46.447 Test: alloc and free memory map ...[2024-07-13 05:52:38.118884] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:46.447 passed 00:05:46.447 Test: mem map translation ...[2024-07-13 05:52:38.149741] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:46.447 [2024-07-13 05:52:38.149779] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:46.447 [2024-07-13 05:52:38.149835] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:46.447 [2024-07-13 05:52:38.149846] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:46.707 passed 00:05:46.707 Test: mem map registration ...[2024-07-13 05:52:38.213833] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:46.707 [2024-07-13 05:52:38.213875] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:46.707 passed 00:05:46.707 Test: mem map adjacent registrations ...passed 00:05:46.707 00:05:46.707 Run Summary: Type Total Ran Passed Failed Inactive 00:05:46.707 suites 1 1 n/a 0 0 00:05:46.707 tests 4 4 4 0 0 00:05:46.707 asserts 152 152 152 0 n/a 00:05:46.707 00:05:46.707 Elapsed time = 0.213 seconds 00:05:46.707 00:05:46.707 real 0m0.230s 00:05:46.707 user 0m0.214s 00:05:46.707 sys 0m0.013s 00:05:46.707 05:52:38 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.707 05:52:38 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:46.707 ************************************ 00:05:46.707 END TEST env_memory 00:05:46.707 ************************************ 00:05:46.707 05:52:38 env -- common/autotest_common.sh@1142 -- # return 0 00:05:46.707 05:52:38 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:46.707 05:52:38 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:46.707 05:52:38 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.707 05:52:38 env -- common/autotest_common.sh@10 -- # set +x 00:05:46.707 ************************************ 00:05:46.707 START TEST env_vtophys 
00:05:46.707 ************************************ 00:05:46.707 05:52:38 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:46.707 EAL: lib.eal log level changed from notice to debug 00:05:46.707 EAL: Detected lcore 0 as core 0 on socket 0 00:05:46.707 EAL: Detected lcore 1 as core 0 on socket 0 00:05:46.707 EAL: Detected lcore 2 as core 0 on socket 0 00:05:46.707 EAL: Detected lcore 3 as core 0 on socket 0 00:05:46.707 EAL: Detected lcore 4 as core 0 on socket 0 00:05:46.707 EAL: Detected lcore 5 as core 0 on socket 0 00:05:46.707 EAL: Detected lcore 6 as core 0 on socket 0 00:05:46.707 EAL: Detected lcore 7 as core 0 on socket 0 00:05:46.707 EAL: Detected lcore 8 as core 0 on socket 0 00:05:46.707 EAL: Detected lcore 9 as core 0 on socket 0 00:05:46.707 EAL: Maximum logical cores by configuration: 128 00:05:46.707 EAL: Detected CPU lcores: 10 00:05:46.707 EAL: Detected NUMA nodes: 1 00:05:46.707 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:46.707 EAL: Detected shared linkage of DPDK 00:05:46.707 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:46.707 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:46.707 EAL: Registered [vdev] bus. 00:05:46.707 EAL: bus.vdev log level changed from disabled to notice 00:05:46.707 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:46.707 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:46.707 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:46.707 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:46.707 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:46.707 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:46.707 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:46.707 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:46.707 EAL: No shared files mode enabled, IPC will be disabled 00:05:46.707 EAL: No shared files mode enabled, IPC is disabled 00:05:46.707 EAL: Selected IOVA mode 'PA' 00:05:46.707 EAL: Probing VFIO support... 00:05:46.707 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:46.707 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:46.707 EAL: Ask a virtual area of 0x2e000 bytes 00:05:46.707 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:46.707 EAL: Setting up physically contiguous memory... 
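EAL decides whether VFIO is usable by looking for the kernel modules under /sys/module, as the "Module /sys/module/vfio not found" line above shows; with neither vfio nor vfio_pci loaded it skips VFIO support, which fits the uio_pci_generic bindings seen earlier in the log. A small hedged sketch of the same checks, plus a look at the hugepage pool that backs the 2 MiB segments set up next:

#!/usr/bin/env bash
# Mirror the module probes visible in the EAL output above.
for mod in vfio vfio_pci; do
  if [[ -d /sys/module/$mod ]]; then
    echo "$mod: loaded"
  else
    echo "$mod: not loaded (EAL will skip VFIO support)"
  fi
done
grep -i huge /proc/meminfo    # hugepage pool backing the 2 MiB memsegs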
00:05:46.707 EAL: Setting maximum number of open files to 524288 00:05:46.707 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:46.707 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:46.707 EAL: Ask a virtual area of 0x61000 bytes 00:05:46.707 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:46.707 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:46.707 EAL: Ask a virtual area of 0x400000000 bytes 00:05:46.707 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:46.707 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:46.707 EAL: Ask a virtual area of 0x61000 bytes 00:05:46.707 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:46.707 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:46.707 EAL: Ask a virtual area of 0x400000000 bytes 00:05:46.707 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:46.707 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:46.707 EAL: Ask a virtual area of 0x61000 bytes 00:05:46.707 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:46.707 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:46.707 EAL: Ask a virtual area of 0x400000000 bytes 00:05:46.707 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:46.707 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:46.707 EAL: Ask a virtual area of 0x61000 bytes 00:05:46.707 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:46.707 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:46.707 EAL: Ask a virtual area of 0x400000000 bytes 00:05:46.707 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:46.707 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:46.707 EAL: Hugepages will be freed exactly as allocated. 00:05:46.707 EAL: No shared files mode enabled, IPC is disabled 00:05:46.707 EAL: No shared files mode enabled, IPC is disabled 00:05:46.967 EAL: TSC frequency is ~2200000 KHz 00:05:46.967 EAL: Main lcore 0 is ready (tid=7fcf49253a00;cpuset=[0]) 00:05:46.967 EAL: Trying to obtain current memory policy. 00:05:46.967 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:46.967 EAL: Restoring previous memory policy: 0 00:05:46.967 EAL: request: mp_malloc_sync 00:05:46.967 EAL: No shared files mode enabled, IPC is disabled 00:05:46.967 EAL: Heap on socket 0 was expanded by 2MB 00:05:46.967 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:46.967 EAL: No shared files mode enabled, IPC is disabled 00:05:46.967 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:46.967 EAL: Mem event callback 'spdk:(nil)' registered 00:05:46.967 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:46.967 00:05:46.967 00:05:46.967 CUnit - A unit testing framework for C - Version 2.1-3 00:05:46.967 http://cunit.sourceforge.net/ 00:05:46.967 00:05:46.967 00:05:46.967 Suite: components_suite 00:05:46.967 Test: vtophys_malloc_test ...passed 00:05:46.967 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
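Each of the four memseg lists above reserves virtual address space for n_segs x hugepage_sz bytes, which is where the recurring 0x400000000 (16 GiB) figure comes from; four lists add up to 64 GiB of reserved VA before any hugepage is actually backed. A one-line check of the arithmetic, using the values printed in the trace:

# 8192 segments * 2 MiB hugepages = 16 GiB of reserved VA per memseg list
printf '0x%x\n' $(( 8192 * 2097152 ))    # -> 0x400000000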
00:05:46.967 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:46.967 EAL: Restoring previous memory policy: 4 00:05:46.967 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.967 EAL: request: mp_malloc_sync 00:05:46.967 EAL: No shared files mode enabled, IPC is disabled 00:05:46.967 EAL: Heap on socket 0 was expanded by 4MB 00:05:46.967 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.967 EAL: request: mp_malloc_sync 00:05:46.967 EAL: No shared files mode enabled, IPC is disabled 00:05:46.967 EAL: Heap on socket 0 was shrunk by 4MB 00:05:46.967 EAL: Trying to obtain current memory policy. 00:05:46.967 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:46.967 EAL: Restoring previous memory policy: 4 00:05:46.967 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.967 EAL: request: mp_malloc_sync 00:05:46.967 EAL: No shared files mode enabled, IPC is disabled 00:05:46.967 EAL: Heap on socket 0 was expanded by 6MB 00:05:46.967 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.967 EAL: request: mp_malloc_sync 00:05:46.967 EAL: No shared files mode enabled, IPC is disabled 00:05:46.967 EAL: Heap on socket 0 was shrunk by 6MB 00:05:46.967 EAL: Trying to obtain current memory policy. 00:05:46.967 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:46.967 EAL: Restoring previous memory policy: 4 00:05:46.967 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.967 EAL: request: mp_malloc_sync 00:05:46.967 EAL: No shared files mode enabled, IPC is disabled 00:05:46.967 EAL: Heap on socket 0 was expanded by 10MB 00:05:46.967 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.967 EAL: request: mp_malloc_sync 00:05:46.967 EAL: No shared files mode enabled, IPC is disabled 00:05:46.967 EAL: Heap on socket 0 was shrunk by 10MB 00:05:46.967 EAL: Trying to obtain current memory policy. 00:05:46.967 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:46.967 EAL: Restoring previous memory policy: 4 00:05:46.967 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.967 EAL: request: mp_malloc_sync 00:05:46.967 EAL: No shared files mode enabled, IPC is disabled 00:05:46.967 EAL: Heap on socket 0 was expanded by 18MB 00:05:46.967 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.967 EAL: request: mp_malloc_sync 00:05:46.967 EAL: No shared files mode enabled, IPC is disabled 00:05:46.967 EAL: Heap on socket 0 was shrunk by 18MB 00:05:46.967 EAL: Trying to obtain current memory policy. 00:05:46.967 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:46.967 EAL: Restoring previous memory policy: 4 00:05:46.967 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.967 EAL: request: mp_malloc_sync 00:05:46.967 EAL: No shared files mode enabled, IPC is disabled 00:05:46.967 EAL: Heap on socket 0 was expanded by 34MB 00:05:46.967 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.967 EAL: request: mp_malloc_sync 00:05:46.968 EAL: No shared files mode enabled, IPC is disabled 00:05:46.968 EAL: Heap on socket 0 was shrunk by 34MB 00:05:46.968 EAL: Trying to obtain current memory policy. 
00:05:46.968 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:46.968 EAL: Restoring previous memory policy: 4 00:05:46.968 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.968 EAL: request: mp_malloc_sync 00:05:46.968 EAL: No shared files mode enabled, IPC is disabled 00:05:46.968 EAL: Heap on socket 0 was expanded by 66MB 00:05:46.968 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.968 EAL: request: mp_malloc_sync 00:05:46.968 EAL: No shared files mode enabled, IPC is disabled 00:05:46.968 EAL: Heap on socket 0 was shrunk by 66MB 00:05:46.968 EAL: Trying to obtain current memory policy. 00:05:46.968 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:46.968 EAL: Restoring previous memory policy: 4 00:05:46.968 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.968 EAL: request: mp_malloc_sync 00:05:46.968 EAL: No shared files mode enabled, IPC is disabled 00:05:46.968 EAL: Heap on socket 0 was expanded by 130MB 00:05:46.968 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.968 EAL: request: mp_malloc_sync 00:05:46.968 EAL: No shared files mode enabled, IPC is disabled 00:05:46.968 EAL: Heap on socket 0 was shrunk by 130MB 00:05:46.968 EAL: Trying to obtain current memory policy. 00:05:46.968 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:46.968 EAL: Restoring previous memory policy: 4 00:05:46.968 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.968 EAL: request: mp_malloc_sync 00:05:46.968 EAL: No shared files mode enabled, IPC is disabled 00:05:46.968 EAL: Heap on socket 0 was expanded by 258MB 00:05:46.968 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.968 EAL: request: mp_malloc_sync 00:05:46.968 EAL: No shared files mode enabled, IPC is disabled 00:05:46.968 EAL: Heap on socket 0 was shrunk by 258MB 00:05:46.968 EAL: Trying to obtain current memory policy. 00:05:46.968 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:47.227 EAL: Restoring previous memory policy: 4 00:05:47.227 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.227 EAL: request: mp_malloc_sync 00:05:47.227 EAL: No shared files mode enabled, IPC is disabled 00:05:47.227 EAL: Heap on socket 0 was expanded by 514MB 00:05:47.227 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.227 EAL: request: mp_malloc_sync 00:05:47.227 EAL: No shared files mode enabled, IPC is disabled 00:05:47.227 EAL: Heap on socket 0 was shrunk by 514MB 00:05:47.227 EAL: Trying to obtain current memory policy. 
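After the initial 2MB round, the sizes exercised by this suite follow 2^k + 2 MB for k = 1..10, which is why the log reads 4MB, 6MB, 10MB, 18MB, ..., 514MB and then the 1026MB round that follows. A quick reconstruction of the sequence from that formula:

# Reproduce the expand/shrink sizes seen above (in MB).
for k in $(seq 1 10); do printf '%dMB ' $(( (1 << k) + 2 )); done; echo
# -> 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB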
00:05:47.227 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:47.487 EAL: Restoring previous memory policy: 4 00:05:47.487 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.487 EAL: request: mp_malloc_sync 00:05:47.487 EAL: No shared files mode enabled, IPC is disabled 00:05:47.487 EAL: Heap on socket 0 was expanded by 1026MB 00:05:47.487 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.746 passed 00:05:47.746 00:05:47.746 Run Summary: Type Total Ran Passed Failed Inactive 00:05:47.746 suites 1 1 n/a 0 0 00:05:47.746 tests 2 2 2 0 0 00:05:47.746 asserts 5281 5281 5281 0 n/a 00:05:47.746 00:05:47.746 Elapsed time = 0.699 seconds 00:05:47.746 EAL: request: mp_malloc_sync 00:05:47.746 EAL: No shared files mode enabled, IPC is disabled 00:05:47.746 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:47.746 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.746 EAL: request: mp_malloc_sync 00:05:47.746 EAL: No shared files mode enabled, IPC is disabled 00:05:47.746 EAL: Heap on socket 0 was shrunk by 2MB 00:05:47.746 EAL: No shared files mode enabled, IPC is disabled 00:05:47.746 EAL: No shared files mode enabled, IPC is disabled 00:05:47.746 EAL: No shared files mode enabled, IPC is disabled 00:05:47.746 00:05:47.746 real 0m0.891s 00:05:47.746 user 0m0.461s 00:05:47.746 sys 0m0.299s 00:05:47.746 05:52:39 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.746 05:52:39 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:47.746 ************************************ 00:05:47.746 END TEST env_vtophys 00:05:47.746 ************************************ 00:05:47.746 05:52:39 env -- common/autotest_common.sh@1142 -- # return 0 00:05:47.747 05:52:39 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:47.747 05:52:39 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:47.747 05:52:39 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.747 05:52:39 env -- common/autotest_common.sh@10 -- # set +x 00:05:47.747 ************************************ 00:05:47.747 START TEST env_pci 00:05:47.747 ************************************ 00:05:47.747 05:52:39 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:47.747 00:05:47.747 00:05:47.747 CUnit - A unit testing framework for C - Version 2.1-3 00:05:47.747 http://cunit.sourceforge.net/ 00:05:47.747 00:05:47.747 00:05:47.747 Suite: pci 00:05:47.747 Test: pci_hook ...[2024-07-13 05:52:39.304715] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 70733 has claimed it 00:05:47.747 passed 00:05:47.747 00:05:47.747 Run Summary: Type Total Ran Passed Failed Inactive 00:05:47.747 suites 1 1 n/a 0 0 00:05:47.747 tests 1 1 1 0 0 00:05:47.747 asserts 25 25 25 0 n/a 00:05:47.747 00:05:47.747 Elapsed time = 0.002 seconds 00:05:47.747 EAL: Cannot find device (10000:00:01.0) 00:05:47.747 EAL: Failed to attach device on primary process 00:05:47.747 00:05:47.747 real 0m0.018s 00:05:47.747 user 0m0.008s 00:05:47.747 sys 0m0.009s 00:05:47.747 05:52:39 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.747 ************************************ 00:05:47.747 05:52:39 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:47.747 END TEST env_pci 00:05:47.747 ************************************ 00:05:47.747 05:52:39 env -- common/autotest_common.sh@1142 -- # 
return 0 00:05:47.747 05:52:39 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:47.747 05:52:39 env -- env/env.sh@15 -- # uname 00:05:47.747 05:52:39 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:47.747 05:52:39 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:47.747 05:52:39 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:47.747 05:52:39 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:05:47.747 05:52:39 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.747 05:52:39 env -- common/autotest_common.sh@10 -- # set +x 00:05:47.747 ************************************ 00:05:47.747 START TEST env_dpdk_post_init 00:05:47.747 ************************************ 00:05:47.747 05:52:39 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:47.747 EAL: Detected CPU lcores: 10 00:05:47.747 EAL: Detected NUMA nodes: 1 00:05:47.747 EAL: Detected shared linkage of DPDK 00:05:47.747 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:47.747 EAL: Selected IOVA mode 'PA' 00:05:48.007 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:48.007 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:48.007 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:48.007 Starting DPDK initialization... 00:05:48.007 Starting SPDK post initialization... 00:05:48.007 SPDK NVMe probe 00:05:48.007 Attaching to 0000:00:10.0 00:05:48.007 Attaching to 0000:00:11.0 00:05:48.007 Attached to 0000:00:10.0 00:05:48.007 Attached to 0000:00:11.0 00:05:48.007 Cleaning up... 
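spdk_nvme can attach to 0000:00:10.0 and 0000:00:11.0 here because the setup.sh run before these env tests moved both controllers from the kernel nvme driver onto uio_pci_generic (the "nvme -> uio_pci_generic" lines earlier). When a probe like the one above fails, the current binding is the first thing worth checking; a hedged sketch, with the device addresses taken from the trace:

#!/usr/bin/env bash
# Show which kernel driver currently owns the two test controllers.
for bdf in 0000:00:10.0 0000:00:11.0; do
  link=/sys/bus/pci/devices/$bdf/driver
  if [[ -e $link ]]; then
    echo "$bdf -> $(basename "$(readlink -f "$link")")"
  else
    echo "$bdf -> no driver bound"
  fi
done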
00:05:48.007 00:05:48.007 real 0m0.179s 00:05:48.007 user 0m0.043s 00:05:48.007 sys 0m0.036s 00:05:48.007 05:52:39 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.007 05:52:39 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:48.007 ************************************ 00:05:48.007 END TEST env_dpdk_post_init 00:05:48.007 ************************************ 00:05:48.007 05:52:39 env -- common/autotest_common.sh@1142 -- # return 0 00:05:48.007 05:52:39 env -- env/env.sh@26 -- # uname 00:05:48.007 05:52:39 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:48.007 05:52:39 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:48.007 05:52:39 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:48.007 05:52:39 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.007 05:52:39 env -- common/autotest_common.sh@10 -- # set +x 00:05:48.007 ************************************ 00:05:48.007 START TEST env_mem_callbacks 00:05:48.007 ************************************ 00:05:48.007 05:52:39 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:48.007 EAL: Detected CPU lcores: 10 00:05:48.007 EAL: Detected NUMA nodes: 1 00:05:48.007 EAL: Detected shared linkage of DPDK 00:05:48.007 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:48.007 EAL: Selected IOVA mode 'PA' 00:05:48.267 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:48.267 00:05:48.267 00:05:48.267 CUnit - A unit testing framework for C - Version 2.1-3 00:05:48.267 http://cunit.sourceforge.net/ 00:05:48.267 00:05:48.267 00:05:48.267 Suite: memory 00:05:48.267 Test: test ... 
00:05:48.267 register 0x200000200000 2097152 00:05:48.267 malloc 3145728 00:05:48.267 register 0x200000400000 4194304 00:05:48.267 buf 0x200000500000 len 3145728 PASSED 00:05:48.267 malloc 64 00:05:48.267 buf 0x2000004fff40 len 64 PASSED 00:05:48.267 malloc 4194304 00:05:48.267 register 0x200000800000 6291456 00:05:48.267 buf 0x200000a00000 len 4194304 PASSED 00:05:48.267 free 0x200000500000 3145728 00:05:48.267 free 0x2000004fff40 64 00:05:48.267 unregister 0x200000400000 4194304 PASSED 00:05:48.267 free 0x200000a00000 4194304 00:05:48.267 unregister 0x200000800000 6291456 PASSED 00:05:48.267 malloc 8388608 00:05:48.267 register 0x200000400000 10485760 00:05:48.267 buf 0x200000600000 len 8388608 PASSED 00:05:48.267 free 0x200000600000 8388608 00:05:48.267 unregister 0x200000400000 10485760 PASSED 00:05:48.267 passed 00:05:48.267 00:05:48.267 Run Summary: Type Total Ran Passed Failed Inactive 00:05:48.267 suites 1 1 n/a 0 0 00:05:48.267 tests 1 1 1 0 0 00:05:48.267 asserts 15 15 15 0 n/a 00:05:48.267 00:05:48.267 Elapsed time = 0.009 seconds 00:05:48.267 00:05:48.267 real 0m0.142s 00:05:48.267 user 0m0.014s 00:05:48.267 sys 0m0.027s 00:05:48.267 05:52:39 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.267 05:52:39 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:48.267 ************************************ 00:05:48.267 END TEST env_mem_callbacks 00:05:48.267 ************************************ 00:05:48.267 05:52:39 env -- common/autotest_common.sh@1142 -- # return 0 00:05:48.267 00:05:48.267 real 0m1.816s 00:05:48.267 user 0m0.856s 00:05:48.267 sys 0m0.595s 00:05:48.267 05:52:39 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.267 05:52:39 env -- common/autotest_common.sh@10 -- # set +x 00:05:48.267 ************************************ 00:05:48.267 END TEST env 00:05:48.267 ************************************ 00:05:48.267 05:52:39 -- common/autotest_common.sh@1142 -- # return 0 00:05:48.267 05:52:39 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:48.267 05:52:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:48.267 05:52:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.267 05:52:39 -- common/autotest_common.sh@10 -- # set +x 00:05:48.267 ************************************ 00:05:48.267 START TEST rpc 00:05:48.267 ************************************ 00:05:48.267 05:52:39 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:48.267 * Looking for test storage... 00:05:48.267 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:48.267 05:52:39 rpc -- rpc/rpc.sh@65 -- # spdk_pid=70841 00:05:48.267 05:52:39 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:48.267 05:52:39 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:48.267 05:52:39 rpc -- rpc/rpc.sh@67 -- # waitforlisten 70841 00:05:48.267 05:52:39 rpc -- common/autotest_common.sh@829 -- # '[' -z 70841 ']' 00:05:48.267 05:52:39 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.267 05:52:39 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:48.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.267 05:52:39 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:48.267 05:52:39 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:48.267 05:52:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.267 [2024-07-13 05:52:39.987410] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:05:48.267 [2024-07-13 05:52:39.987514] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70841 ] 00:05:48.527 [2024-07-13 05:52:40.127205] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.527 [2024-07-13 05:52:40.159660] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:48.527 [2024-07-13 05:52:40.159723] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 70841' to capture a snapshot of events at runtime. 00:05:48.527 [2024-07-13 05:52:40.159732] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:48.527 [2024-07-13 05:52:40.159739] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:48.527 [2024-07-13 05:52:40.159744] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid70841 for offline analysis/debug. 00:05:48.527 [2024-07-13 05:52:40.159769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.527 [2024-07-13 05:52:40.187419] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:48.786 05:52:40 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:48.786 05:52:40 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:48.786 05:52:40 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:48.786 05:52:40 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:48.786 05:52:40 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:48.786 05:52:40 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:48.786 05:52:40 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:48.786 05:52:40 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.786 05:52:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.786 ************************************ 00:05:48.786 START TEST rpc_integrity 00:05:48.786 ************************************ 00:05:48.786 05:52:40 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:48.786 05:52:40 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:48.786 05:52:40 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:48.787 05:52:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.787 05:52:40 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:48.787 05:52:40 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:48.787 05:52:40 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:48.787 05:52:40 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:48.787 05:52:40 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd 
bdev_malloc_create 8 512 00:05:48.787 05:52:40 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:48.787 05:52:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.787 05:52:40 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:48.787 05:52:40 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:48.787 05:52:40 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:48.787 05:52:40 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:48.787 05:52:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.787 05:52:40 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:48.787 05:52:40 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:48.787 { 00:05:48.787 "name": "Malloc0", 00:05:48.787 "aliases": [ 00:05:48.787 "80b7f96a-b894-4373-8781-92b6f48fdfae" 00:05:48.787 ], 00:05:48.787 "product_name": "Malloc disk", 00:05:48.787 "block_size": 512, 00:05:48.787 "num_blocks": 16384, 00:05:48.787 "uuid": "80b7f96a-b894-4373-8781-92b6f48fdfae", 00:05:48.787 "assigned_rate_limits": { 00:05:48.787 "rw_ios_per_sec": 0, 00:05:48.787 "rw_mbytes_per_sec": 0, 00:05:48.787 "r_mbytes_per_sec": 0, 00:05:48.787 "w_mbytes_per_sec": 0 00:05:48.787 }, 00:05:48.787 "claimed": false, 00:05:48.787 "zoned": false, 00:05:48.787 "supported_io_types": { 00:05:48.787 "read": true, 00:05:48.787 "write": true, 00:05:48.787 "unmap": true, 00:05:48.787 "flush": true, 00:05:48.787 "reset": true, 00:05:48.787 "nvme_admin": false, 00:05:48.787 "nvme_io": false, 00:05:48.787 "nvme_io_md": false, 00:05:48.787 "write_zeroes": true, 00:05:48.787 "zcopy": true, 00:05:48.787 "get_zone_info": false, 00:05:48.787 "zone_management": false, 00:05:48.787 "zone_append": false, 00:05:48.787 "compare": false, 00:05:48.787 "compare_and_write": false, 00:05:48.787 "abort": true, 00:05:48.787 "seek_hole": false, 00:05:48.787 "seek_data": false, 00:05:48.787 "copy": true, 00:05:48.787 "nvme_iov_md": false 00:05:48.787 }, 00:05:48.787 "memory_domains": [ 00:05:48.787 { 00:05:48.787 "dma_device_id": "system", 00:05:48.787 "dma_device_type": 1 00:05:48.787 }, 00:05:48.787 { 00:05:48.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:48.787 "dma_device_type": 2 00:05:48.787 } 00:05:48.787 ], 00:05:48.787 "driver_specific": {} 00:05:48.787 } 00:05:48.787 ]' 00:05:48.787 05:52:40 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:48.787 05:52:40 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:48.787 05:52:40 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:48.787 05:52:40 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:48.787 05:52:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.787 [2024-07-13 05:52:40.467066] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:48.787 [2024-07-13 05:52:40.467121] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:48.787 [2024-07-13 05:52:40.467141] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x147a070 00:05:48.787 [2024-07-13 05:52:40.467150] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:48.787 [2024-07-13 05:52:40.468591] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:48.787 [2024-07-13 05:52:40.468626] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
Passthru0 00:05:48.787 Passthru0 00:05:48.787 05:52:40 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:48.787 05:52:40 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:48.787 05:52:40 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:48.787 05:52:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.787 05:52:40 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:48.787 05:52:40 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:48.787 { 00:05:48.787 "name": "Malloc0", 00:05:48.787 "aliases": [ 00:05:48.787 "80b7f96a-b894-4373-8781-92b6f48fdfae" 00:05:48.787 ], 00:05:48.787 "product_name": "Malloc disk", 00:05:48.787 "block_size": 512, 00:05:48.787 "num_blocks": 16384, 00:05:48.787 "uuid": "80b7f96a-b894-4373-8781-92b6f48fdfae", 00:05:48.787 "assigned_rate_limits": { 00:05:48.787 "rw_ios_per_sec": 0, 00:05:48.787 "rw_mbytes_per_sec": 0, 00:05:48.787 "r_mbytes_per_sec": 0, 00:05:48.787 "w_mbytes_per_sec": 0 00:05:48.787 }, 00:05:48.787 "claimed": true, 00:05:48.787 "claim_type": "exclusive_write", 00:05:48.787 "zoned": false, 00:05:48.787 "supported_io_types": { 00:05:48.787 "read": true, 00:05:48.787 "write": true, 00:05:48.787 "unmap": true, 00:05:48.787 "flush": true, 00:05:48.787 "reset": true, 00:05:48.787 "nvme_admin": false, 00:05:48.787 "nvme_io": false, 00:05:48.787 "nvme_io_md": false, 00:05:48.787 "write_zeroes": true, 00:05:48.787 "zcopy": true, 00:05:48.787 "get_zone_info": false, 00:05:48.787 "zone_management": false, 00:05:48.787 "zone_append": false, 00:05:48.787 "compare": false, 00:05:48.787 "compare_and_write": false, 00:05:48.787 "abort": true, 00:05:48.787 "seek_hole": false, 00:05:48.787 "seek_data": false, 00:05:48.787 "copy": true, 00:05:48.787 "nvme_iov_md": false 00:05:48.787 }, 00:05:48.787 "memory_domains": [ 00:05:48.787 { 00:05:48.787 "dma_device_id": "system", 00:05:48.787 "dma_device_type": 1 00:05:48.787 }, 00:05:48.787 { 00:05:48.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:48.787 "dma_device_type": 2 00:05:48.787 } 00:05:48.787 ], 00:05:48.787 "driver_specific": {} 00:05:48.787 }, 00:05:48.787 { 00:05:48.787 "name": "Passthru0", 00:05:48.787 "aliases": [ 00:05:48.787 "493fd9b0-778d-595f-b3af-f2cd3f8fc4e4" 00:05:48.787 ], 00:05:48.787 "product_name": "passthru", 00:05:48.787 "block_size": 512, 00:05:48.787 "num_blocks": 16384, 00:05:48.787 "uuid": "493fd9b0-778d-595f-b3af-f2cd3f8fc4e4", 00:05:48.787 "assigned_rate_limits": { 00:05:48.787 "rw_ios_per_sec": 0, 00:05:48.787 "rw_mbytes_per_sec": 0, 00:05:48.787 "r_mbytes_per_sec": 0, 00:05:48.787 "w_mbytes_per_sec": 0 00:05:48.787 }, 00:05:48.787 "claimed": false, 00:05:48.787 "zoned": false, 00:05:48.787 "supported_io_types": { 00:05:48.787 "read": true, 00:05:48.787 "write": true, 00:05:48.787 "unmap": true, 00:05:48.787 "flush": true, 00:05:48.787 "reset": true, 00:05:48.787 "nvme_admin": false, 00:05:48.787 "nvme_io": false, 00:05:48.787 "nvme_io_md": false, 00:05:48.787 "write_zeroes": true, 00:05:48.787 "zcopy": true, 00:05:48.787 "get_zone_info": false, 00:05:48.787 "zone_management": false, 00:05:48.787 "zone_append": false, 00:05:48.787 "compare": false, 00:05:48.787 "compare_and_write": false, 00:05:48.787 "abort": true, 00:05:48.787 "seek_hole": false, 00:05:48.787 "seek_data": false, 00:05:48.787 "copy": true, 00:05:48.787 "nvme_iov_md": false 00:05:48.787 }, 00:05:48.787 "memory_domains": [ 00:05:48.787 { 00:05:48.787 "dma_device_id": "system", 00:05:48.787 
"dma_device_type": 1 00:05:48.787 }, 00:05:48.787 { 00:05:48.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:48.787 "dma_device_type": 2 00:05:48.787 } 00:05:48.787 ], 00:05:48.787 "driver_specific": { 00:05:48.787 "passthru": { 00:05:48.787 "name": "Passthru0", 00:05:48.787 "base_bdev_name": "Malloc0" 00:05:48.787 } 00:05:48.787 } 00:05:48.787 } 00:05:48.787 ]' 00:05:48.787 05:52:40 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:49.047 05:52:40 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:49.047 05:52:40 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:49.047 05:52:40 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.047 05:52:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.047 05:52:40 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.047 05:52:40 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:49.047 05:52:40 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.047 05:52:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.047 05:52:40 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.047 05:52:40 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:49.047 05:52:40 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.047 05:52:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.047 05:52:40 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.047 05:52:40 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:49.047 05:52:40 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:49.047 05:52:40 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:49.047 00:05:49.047 real 0m0.322s 00:05:49.047 user 0m0.214s 00:05:49.047 sys 0m0.039s 00:05:49.047 05:52:40 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:49.047 05:52:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.047 ************************************ 00:05:49.047 END TEST rpc_integrity 00:05:49.047 ************************************ 00:05:49.047 05:52:40 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:49.047 05:52:40 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:49.047 05:52:40 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:49.047 05:52:40 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.047 05:52:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.047 ************************************ 00:05:49.047 START TEST rpc_plugins 00:05:49.047 ************************************ 00:05:49.047 05:52:40 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:49.047 05:52:40 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:49.047 05:52:40 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.047 05:52:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:49.047 05:52:40 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.047 05:52:40 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:49.047 05:52:40 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:49.047 05:52:40 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.047 05:52:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:49.047 
05:52:40 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.047 05:52:40 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:49.047 { 00:05:49.047 "name": "Malloc1", 00:05:49.047 "aliases": [ 00:05:49.047 "f33be8bd-750a-4ee6-87ed-1a9f864b7c0b" 00:05:49.047 ], 00:05:49.047 "product_name": "Malloc disk", 00:05:49.047 "block_size": 4096, 00:05:49.047 "num_blocks": 256, 00:05:49.047 "uuid": "f33be8bd-750a-4ee6-87ed-1a9f864b7c0b", 00:05:49.047 "assigned_rate_limits": { 00:05:49.047 "rw_ios_per_sec": 0, 00:05:49.047 "rw_mbytes_per_sec": 0, 00:05:49.047 "r_mbytes_per_sec": 0, 00:05:49.047 "w_mbytes_per_sec": 0 00:05:49.047 }, 00:05:49.047 "claimed": false, 00:05:49.047 "zoned": false, 00:05:49.047 "supported_io_types": { 00:05:49.047 "read": true, 00:05:49.047 "write": true, 00:05:49.047 "unmap": true, 00:05:49.047 "flush": true, 00:05:49.047 "reset": true, 00:05:49.047 "nvme_admin": false, 00:05:49.047 "nvme_io": false, 00:05:49.047 "nvme_io_md": false, 00:05:49.047 "write_zeroes": true, 00:05:49.047 "zcopy": true, 00:05:49.047 "get_zone_info": false, 00:05:49.047 "zone_management": false, 00:05:49.047 "zone_append": false, 00:05:49.047 "compare": false, 00:05:49.047 "compare_and_write": false, 00:05:49.047 "abort": true, 00:05:49.047 "seek_hole": false, 00:05:49.047 "seek_data": false, 00:05:49.047 "copy": true, 00:05:49.047 "nvme_iov_md": false 00:05:49.047 }, 00:05:49.047 "memory_domains": [ 00:05:49.047 { 00:05:49.047 "dma_device_id": "system", 00:05:49.047 "dma_device_type": 1 00:05:49.047 }, 00:05:49.047 { 00:05:49.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:49.047 "dma_device_type": 2 00:05:49.047 } 00:05:49.047 ], 00:05:49.047 "driver_specific": {} 00:05:49.047 } 00:05:49.047 ]' 00:05:49.047 05:52:40 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:49.047 05:52:40 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:49.047 05:52:40 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:49.047 05:52:40 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.047 05:52:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:49.307 05:52:40 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.307 05:52:40 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:49.307 05:52:40 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.307 05:52:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:49.307 05:52:40 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.307 05:52:40 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:49.307 05:52:40 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:49.307 05:52:40 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:49.307 00:05:49.307 real 0m0.156s 00:05:49.307 user 0m0.100s 00:05:49.307 sys 0m0.022s 00:05:49.307 05:52:40 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:49.307 05:52:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:49.307 ************************************ 00:05:49.307 END TEST rpc_plugins 00:05:49.307 ************************************ 00:05:49.307 05:52:40 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:49.307 05:52:40 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:49.307 05:52:40 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:49.307 05:52:40 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:05:49.307 05:52:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.307 ************************************ 00:05:49.307 START TEST rpc_trace_cmd_test 00:05:49.307 ************************************ 00:05:49.307 05:52:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:49.307 05:52:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:49.307 05:52:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:49.307 05:52:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.307 05:52:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:49.307 05:52:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.307 05:52:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:49.307 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid70841", 00:05:49.307 "tpoint_group_mask": "0x8", 00:05:49.307 "iscsi_conn": { 00:05:49.307 "mask": "0x2", 00:05:49.307 "tpoint_mask": "0x0" 00:05:49.307 }, 00:05:49.307 "scsi": { 00:05:49.307 "mask": "0x4", 00:05:49.307 "tpoint_mask": "0x0" 00:05:49.307 }, 00:05:49.307 "bdev": { 00:05:49.307 "mask": "0x8", 00:05:49.307 "tpoint_mask": "0xffffffffffffffff" 00:05:49.307 }, 00:05:49.307 "nvmf_rdma": { 00:05:49.307 "mask": "0x10", 00:05:49.307 "tpoint_mask": "0x0" 00:05:49.307 }, 00:05:49.307 "nvmf_tcp": { 00:05:49.307 "mask": "0x20", 00:05:49.307 "tpoint_mask": "0x0" 00:05:49.307 }, 00:05:49.307 "ftl": { 00:05:49.307 "mask": "0x40", 00:05:49.307 "tpoint_mask": "0x0" 00:05:49.307 }, 00:05:49.307 "blobfs": { 00:05:49.307 "mask": "0x80", 00:05:49.307 "tpoint_mask": "0x0" 00:05:49.307 }, 00:05:49.307 "dsa": { 00:05:49.307 "mask": "0x200", 00:05:49.307 "tpoint_mask": "0x0" 00:05:49.307 }, 00:05:49.307 "thread": { 00:05:49.307 "mask": "0x400", 00:05:49.307 "tpoint_mask": "0x0" 00:05:49.307 }, 00:05:49.307 "nvme_pcie": { 00:05:49.307 "mask": "0x800", 00:05:49.307 "tpoint_mask": "0x0" 00:05:49.307 }, 00:05:49.307 "iaa": { 00:05:49.307 "mask": "0x1000", 00:05:49.307 "tpoint_mask": "0x0" 00:05:49.307 }, 00:05:49.307 "nvme_tcp": { 00:05:49.307 "mask": "0x2000", 00:05:49.307 "tpoint_mask": "0x0" 00:05:49.307 }, 00:05:49.307 "bdev_nvme": { 00:05:49.307 "mask": "0x4000", 00:05:49.307 "tpoint_mask": "0x0" 00:05:49.307 }, 00:05:49.307 "sock": { 00:05:49.307 "mask": "0x8000", 00:05:49.307 "tpoint_mask": "0x0" 00:05:49.307 } 00:05:49.307 }' 00:05:49.307 05:52:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:49.307 05:52:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:49.307 05:52:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:49.307 05:52:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:49.567 05:52:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:49.567 05:52:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:49.567 05:52:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:49.567 05:52:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:49.567 05:52:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:49.567 05:52:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:49.567 00:05:49.567 real 0m0.285s 00:05:49.567 user 0m0.249s 00:05:49.567 sys 0m0.023s 00:05:49.567 05:52:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:49.567 05:52:41 
rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:49.567 ************************************ 00:05:49.567 END TEST rpc_trace_cmd_test 00:05:49.567 ************************************ 00:05:49.567 05:52:41 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:49.567 05:52:41 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:49.567 05:52:41 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:49.567 05:52:41 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:49.567 05:52:41 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:49.567 05:52:41 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.567 05:52:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.567 ************************************ 00:05:49.567 START TEST rpc_daemon_integrity 00:05:49.567 ************************************ 00:05:49.567 05:52:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:49.567 05:52:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:49.567 05:52:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.567 05:52:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.567 05:52:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.567 05:52:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:49.567 05:52:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:49.826 05:52:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:49.826 05:52:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:49.826 05:52:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.826 05:52:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.826 05:52:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.826 05:52:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:49.826 05:52:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:49.826 05:52:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.826 05:52:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.826 05:52:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.826 05:52:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:49.826 { 00:05:49.826 "name": "Malloc2", 00:05:49.826 "aliases": [ 00:05:49.826 "b14973ac-be9f-483b-b050-5cd5e23cdebf" 00:05:49.826 ], 00:05:49.826 "product_name": "Malloc disk", 00:05:49.826 "block_size": 512, 00:05:49.826 "num_blocks": 16384, 00:05:49.826 "uuid": "b14973ac-be9f-483b-b050-5cd5e23cdebf", 00:05:49.826 "assigned_rate_limits": { 00:05:49.826 "rw_ios_per_sec": 0, 00:05:49.826 "rw_mbytes_per_sec": 0, 00:05:49.826 "r_mbytes_per_sec": 0, 00:05:49.826 "w_mbytes_per_sec": 0 00:05:49.826 }, 00:05:49.826 "claimed": false, 00:05:49.826 "zoned": false, 00:05:49.826 "supported_io_types": { 00:05:49.826 "read": true, 00:05:49.826 "write": true, 00:05:49.826 "unmap": true, 00:05:49.826 "flush": true, 00:05:49.826 "reset": true, 00:05:49.826 "nvme_admin": false, 00:05:49.826 "nvme_io": false, 00:05:49.826 "nvme_io_md": false, 00:05:49.826 "write_zeroes": true, 00:05:49.826 "zcopy": true, 00:05:49.826 "get_zone_info": false, 00:05:49.826 "zone_management": false, 00:05:49.826 "zone_append": false, 
00:05:49.826 "compare": false, 00:05:49.826 "compare_and_write": false, 00:05:49.826 "abort": true, 00:05:49.826 "seek_hole": false, 00:05:49.826 "seek_data": false, 00:05:49.826 "copy": true, 00:05:49.826 "nvme_iov_md": false 00:05:49.826 }, 00:05:49.826 "memory_domains": [ 00:05:49.826 { 00:05:49.826 "dma_device_id": "system", 00:05:49.826 "dma_device_type": 1 00:05:49.826 }, 00:05:49.826 { 00:05:49.826 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:49.826 "dma_device_type": 2 00:05:49.826 } 00:05:49.826 ], 00:05:49.826 "driver_specific": {} 00:05:49.826 } 00:05:49.826 ]' 00:05:49.826 05:52:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:49.826 05:52:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:49.826 05:52:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:49.826 05:52:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.826 05:52:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.826 [2024-07-13 05:52:41.383441] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:49.826 [2024-07-13 05:52:41.383490] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:49.826 [2024-07-13 05:52:41.383510] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x146ba10 00:05:49.826 [2024-07-13 05:52:41.383519] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:49.826 [2024-07-13 05:52:41.384806] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:49.826 [2024-07-13 05:52:41.384849] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:49.826 Passthru0 00:05:49.826 05:52:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.826 05:52:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:49.826 05:52:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.826 05:52:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.826 05:52:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.826 05:52:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:49.826 { 00:05:49.826 "name": "Malloc2", 00:05:49.826 "aliases": [ 00:05:49.826 "b14973ac-be9f-483b-b050-5cd5e23cdebf" 00:05:49.826 ], 00:05:49.826 "product_name": "Malloc disk", 00:05:49.826 "block_size": 512, 00:05:49.826 "num_blocks": 16384, 00:05:49.826 "uuid": "b14973ac-be9f-483b-b050-5cd5e23cdebf", 00:05:49.826 "assigned_rate_limits": { 00:05:49.826 "rw_ios_per_sec": 0, 00:05:49.826 "rw_mbytes_per_sec": 0, 00:05:49.826 "r_mbytes_per_sec": 0, 00:05:49.826 "w_mbytes_per_sec": 0 00:05:49.826 }, 00:05:49.826 "claimed": true, 00:05:49.826 "claim_type": "exclusive_write", 00:05:49.826 "zoned": false, 00:05:49.826 "supported_io_types": { 00:05:49.826 "read": true, 00:05:49.826 "write": true, 00:05:49.826 "unmap": true, 00:05:49.826 "flush": true, 00:05:49.826 "reset": true, 00:05:49.826 "nvme_admin": false, 00:05:49.826 "nvme_io": false, 00:05:49.826 "nvme_io_md": false, 00:05:49.826 "write_zeroes": true, 00:05:49.826 "zcopy": true, 00:05:49.826 "get_zone_info": false, 00:05:49.826 "zone_management": false, 00:05:49.826 "zone_append": false, 00:05:49.826 "compare": false, 00:05:49.826 "compare_and_write": false, 00:05:49.826 "abort": true, 00:05:49.826 "seek_hole": 
false, 00:05:49.826 "seek_data": false, 00:05:49.826 "copy": true, 00:05:49.826 "nvme_iov_md": false 00:05:49.826 }, 00:05:49.826 "memory_domains": [ 00:05:49.826 { 00:05:49.826 "dma_device_id": "system", 00:05:49.826 "dma_device_type": 1 00:05:49.826 }, 00:05:49.826 { 00:05:49.826 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:49.826 "dma_device_type": 2 00:05:49.826 } 00:05:49.826 ], 00:05:49.826 "driver_specific": {} 00:05:49.826 }, 00:05:49.826 { 00:05:49.826 "name": "Passthru0", 00:05:49.826 "aliases": [ 00:05:49.826 "2aca6163-e4ae-5433-a068-05e809f62d7b" 00:05:49.826 ], 00:05:49.826 "product_name": "passthru", 00:05:49.826 "block_size": 512, 00:05:49.826 "num_blocks": 16384, 00:05:49.826 "uuid": "2aca6163-e4ae-5433-a068-05e809f62d7b", 00:05:49.826 "assigned_rate_limits": { 00:05:49.826 "rw_ios_per_sec": 0, 00:05:49.826 "rw_mbytes_per_sec": 0, 00:05:49.826 "r_mbytes_per_sec": 0, 00:05:49.826 "w_mbytes_per_sec": 0 00:05:49.826 }, 00:05:49.826 "claimed": false, 00:05:49.826 "zoned": false, 00:05:49.826 "supported_io_types": { 00:05:49.826 "read": true, 00:05:49.826 "write": true, 00:05:49.826 "unmap": true, 00:05:49.826 "flush": true, 00:05:49.826 "reset": true, 00:05:49.826 "nvme_admin": false, 00:05:49.826 "nvme_io": false, 00:05:49.826 "nvme_io_md": false, 00:05:49.826 "write_zeroes": true, 00:05:49.826 "zcopy": true, 00:05:49.826 "get_zone_info": false, 00:05:49.826 "zone_management": false, 00:05:49.826 "zone_append": false, 00:05:49.826 "compare": false, 00:05:49.826 "compare_and_write": false, 00:05:49.826 "abort": true, 00:05:49.826 "seek_hole": false, 00:05:49.826 "seek_data": false, 00:05:49.826 "copy": true, 00:05:49.826 "nvme_iov_md": false 00:05:49.826 }, 00:05:49.826 "memory_domains": [ 00:05:49.826 { 00:05:49.826 "dma_device_id": "system", 00:05:49.826 "dma_device_type": 1 00:05:49.826 }, 00:05:49.826 { 00:05:49.826 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:49.826 "dma_device_type": 2 00:05:49.826 } 00:05:49.826 ], 00:05:49.826 "driver_specific": { 00:05:49.826 "passthru": { 00:05:49.826 "name": "Passthru0", 00:05:49.826 "base_bdev_name": "Malloc2" 00:05:49.826 } 00:05:49.826 } 00:05:49.826 } 00:05:49.826 ]' 00:05:49.826 05:52:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:49.826 05:52:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:49.826 05:52:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:49.827 05:52:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.827 05:52:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.827 05:52:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.827 05:52:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:49.827 05:52:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.827 05:52:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.827 05:52:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.827 05:52:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:49.827 05:52:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.827 05:52:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.827 05:52:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.827 05:52:41 rpc.rpc_daemon_integrity -- 
rpc/rpc.sh@25 -- # bdevs='[]' 00:05:49.827 05:52:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:50.096 05:52:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:50.096 00:05:50.096 real 0m0.319s 00:05:50.096 user 0m0.223s 00:05:50.096 sys 0m0.030s 00:05:50.096 05:52:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.096 05:52:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:50.096 ************************************ 00:05:50.096 END TEST rpc_daemon_integrity 00:05:50.096 ************************************ 00:05:50.096 05:52:41 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:50.096 05:52:41 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:50.096 05:52:41 rpc -- rpc/rpc.sh@84 -- # killprocess 70841 00:05:50.096 05:52:41 rpc -- common/autotest_common.sh@948 -- # '[' -z 70841 ']' 00:05:50.096 05:52:41 rpc -- common/autotest_common.sh@952 -- # kill -0 70841 00:05:50.096 05:52:41 rpc -- common/autotest_common.sh@953 -- # uname 00:05:50.096 05:52:41 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:50.096 05:52:41 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 70841 00:05:50.096 05:52:41 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:50.096 05:52:41 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:50.096 killing process with pid 70841 00:05:50.096 05:52:41 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 70841' 00:05:50.096 05:52:41 rpc -- common/autotest_common.sh@967 -- # kill 70841 00:05:50.096 05:52:41 rpc -- common/autotest_common.sh@972 -- # wait 70841 00:05:50.380 00:05:50.380 real 0m1.996s 00:05:50.380 user 0m2.807s 00:05:50.380 sys 0m0.470s 00:05:50.380 05:52:41 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.380 ************************************ 00:05:50.380 END TEST rpc 00:05:50.380 05:52:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.380 ************************************ 00:05:50.380 05:52:41 -- common/autotest_common.sh@1142 -- # return 0 00:05:50.380 05:52:41 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:50.380 05:52:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:50.380 05:52:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.380 05:52:41 -- common/autotest_common.sh@10 -- # set +x 00:05:50.380 ************************************ 00:05:50.380 START TEST skip_rpc 00:05:50.380 ************************************ 00:05:50.380 05:52:41 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:50.380 * Looking for test storage... 
00:05:50.380 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:50.380 05:52:41 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:50.380 05:52:41 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:50.380 05:52:41 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:50.380 05:52:41 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:50.380 05:52:41 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.380 05:52:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.380 ************************************ 00:05:50.380 START TEST skip_rpc 00:05:50.380 ************************************ 00:05:50.380 05:52:41 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:50.380 05:52:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=71022 00:05:50.380 05:52:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:50.380 05:52:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:50.380 05:52:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:50.380 [2024-07-13 05:52:42.045114] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:05:50.380 [2024-07-13 05:52:42.045220] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71022 ] 00:05:50.650 [2024-07-13 05:52:42.178983] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.650 [2024-07-13 05:52:42.210608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.650 [2024-07-13 05:52:42.236338] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:55.918 05:52:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:55.918 05:52:46 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:55.918 05:52:46 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:55.918 05:52:46 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:55.918 05:52:46 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:55.918 05:52:46 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:55.918 05:52:46 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:55.918 05:52:46 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:55.918 05:52:46 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.918 05:52:46 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.918 05:52:46 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:55.918 05:52:46 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:55.918 05:52:46 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:55.918 05:52:46 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:55.918 05:52:46 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:55.918 05:52:46 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:55.918 05:52:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 71022 00:05:55.918 05:52:46 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 71022 ']' 00:05:55.918 05:52:46 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 71022 00:05:55.918 05:52:47 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:55.918 05:52:47 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:55.918 05:52:47 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71022 00:05:55.918 05:52:47 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:55.918 05:52:47 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:55.918 killing process with pid 71022 00:05:55.918 05:52:47 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71022' 00:05:55.918 05:52:47 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 71022 00:05:55.918 05:52:47 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 71022 00:05:55.918 00:05:55.918 real 0m5.265s 00:05:55.918 user 0m5.009s 00:05:55.918 sys 0m0.162s 00:05:55.918 05:52:47 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.918 05:52:47 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.918 ************************************ 00:05:55.918 END TEST skip_rpc 00:05:55.918 ************************************ 00:05:55.918 05:52:47 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:55.918 05:52:47 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:55.918 05:52:47 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:55.918 05:52:47 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.918 05:52:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.918 ************************************ 00:05:55.918 START TEST skip_rpc_with_json 00:05:55.918 ************************************ 00:05:55.919 05:52:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:55.919 05:52:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:55.919 05:52:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=71108 00:05:55.919 05:52:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:55.919 05:52:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:55.919 05:52:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 71108 00:05:55.919 05:52:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 71108 ']' 00:05:55.919 05:52:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.919 05:52:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:55.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.919 05:52:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:55.919 05:52:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:55.919 05:52:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:55.919 [2024-07-13 05:52:47.348681] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:05:55.919 [2024-07-13 05:52:47.348804] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71108 ] 00:05:55.919 [2024-07-13 05:52:47.477872] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.919 [2024-07-13 05:52:47.510050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.919 [2024-07-13 05:52:47.536725] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:56.178 05:52:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:56.178 05:52:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:56.178 05:52:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:56.179 05:52:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.179 05:52:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:56.179 [2024-07-13 05:52:47.654132] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:56.179 request: 00:05:56.179 { 00:05:56.179 "trtype": "tcp", 00:05:56.179 "method": "nvmf_get_transports", 00:05:56.179 "req_id": 1 00:05:56.179 } 00:05:56.179 Got JSON-RPC error response 00:05:56.179 response: 00:05:56.179 { 00:05:56.179 "code": -19, 00:05:56.179 "message": "No such device" 00:05:56.179 } 00:05:56.179 05:52:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:56.179 05:52:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:56.179 05:52:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.179 05:52:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:56.179 [2024-07-13 05:52:47.666229] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:56.179 05:52:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.179 05:52:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:56.179 05:52:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.179 05:52:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:56.179 05:52:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.179 05:52:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:56.179 { 00:05:56.179 "subsystems": [ 00:05:56.179 { 00:05:56.179 "subsystem": "keyring", 00:05:56.179 "config": [] 00:05:56.179 }, 00:05:56.179 { 00:05:56.179 "subsystem": "iobuf", 00:05:56.179 "config": [ 00:05:56.179 { 00:05:56.179 "method": "iobuf_set_options", 00:05:56.179 "params": { 00:05:56.179 "small_pool_count": 8192, 00:05:56.179 "large_pool_count": 1024, 00:05:56.179 "small_bufsize": 8192, 00:05:56.179 "large_bufsize": 135168 00:05:56.179 } 00:05:56.179 } 00:05:56.179 
] 00:05:56.179 }, 00:05:56.179 { 00:05:56.179 "subsystem": "sock", 00:05:56.179 "config": [ 00:05:56.179 { 00:05:56.179 "method": "sock_set_default_impl", 00:05:56.179 "params": { 00:05:56.179 "impl_name": "uring" 00:05:56.179 } 00:05:56.179 }, 00:05:56.179 { 00:05:56.179 "method": "sock_impl_set_options", 00:05:56.179 "params": { 00:05:56.179 "impl_name": "ssl", 00:05:56.179 "recv_buf_size": 4096, 00:05:56.179 "send_buf_size": 4096, 00:05:56.179 "enable_recv_pipe": true, 00:05:56.179 "enable_quickack": false, 00:05:56.179 "enable_placement_id": 0, 00:05:56.179 "enable_zerocopy_send_server": true, 00:05:56.179 "enable_zerocopy_send_client": false, 00:05:56.179 "zerocopy_threshold": 0, 00:05:56.179 "tls_version": 0, 00:05:56.179 "enable_ktls": false 00:05:56.179 } 00:05:56.179 }, 00:05:56.179 { 00:05:56.179 "method": "sock_impl_set_options", 00:05:56.179 "params": { 00:05:56.179 "impl_name": "posix", 00:05:56.179 "recv_buf_size": 2097152, 00:05:56.179 "send_buf_size": 2097152, 00:05:56.179 "enable_recv_pipe": true, 00:05:56.179 "enable_quickack": false, 00:05:56.179 "enable_placement_id": 0, 00:05:56.179 "enable_zerocopy_send_server": true, 00:05:56.179 "enable_zerocopy_send_client": false, 00:05:56.179 "zerocopy_threshold": 0, 00:05:56.179 "tls_version": 0, 00:05:56.179 "enable_ktls": false 00:05:56.179 } 00:05:56.179 }, 00:05:56.179 { 00:05:56.179 "method": "sock_impl_set_options", 00:05:56.179 "params": { 00:05:56.179 "impl_name": "uring", 00:05:56.179 "recv_buf_size": 2097152, 00:05:56.179 "send_buf_size": 2097152, 00:05:56.179 "enable_recv_pipe": true, 00:05:56.179 "enable_quickack": false, 00:05:56.179 "enable_placement_id": 0, 00:05:56.179 "enable_zerocopy_send_server": false, 00:05:56.179 "enable_zerocopy_send_client": false, 00:05:56.179 "zerocopy_threshold": 0, 00:05:56.179 "tls_version": 0, 00:05:56.179 "enable_ktls": false 00:05:56.179 } 00:05:56.179 } 00:05:56.179 ] 00:05:56.179 }, 00:05:56.179 { 00:05:56.179 "subsystem": "vmd", 00:05:56.179 "config": [] 00:05:56.179 }, 00:05:56.179 { 00:05:56.179 "subsystem": "accel", 00:05:56.179 "config": [ 00:05:56.179 { 00:05:56.179 "method": "accel_set_options", 00:05:56.179 "params": { 00:05:56.179 "small_cache_size": 128, 00:05:56.179 "large_cache_size": 16, 00:05:56.179 "task_count": 2048, 00:05:56.179 "sequence_count": 2048, 00:05:56.179 "buf_count": 2048 00:05:56.179 } 00:05:56.179 } 00:05:56.179 ] 00:05:56.179 }, 00:05:56.179 { 00:05:56.179 "subsystem": "bdev", 00:05:56.179 "config": [ 00:05:56.179 { 00:05:56.179 "method": "bdev_set_options", 00:05:56.179 "params": { 00:05:56.179 "bdev_io_pool_size": 65535, 00:05:56.179 "bdev_io_cache_size": 256, 00:05:56.179 "bdev_auto_examine": true, 00:05:56.179 "iobuf_small_cache_size": 128, 00:05:56.179 "iobuf_large_cache_size": 16 00:05:56.179 } 00:05:56.179 }, 00:05:56.179 { 00:05:56.179 "method": "bdev_raid_set_options", 00:05:56.179 "params": { 00:05:56.179 "process_window_size_kb": 1024 00:05:56.179 } 00:05:56.179 }, 00:05:56.179 { 00:05:56.179 "method": "bdev_iscsi_set_options", 00:05:56.179 "params": { 00:05:56.179 "timeout_sec": 30 00:05:56.179 } 00:05:56.179 }, 00:05:56.179 { 00:05:56.179 "method": "bdev_nvme_set_options", 00:05:56.179 "params": { 00:05:56.179 "action_on_timeout": "none", 00:05:56.179 "timeout_us": 0, 00:05:56.179 "timeout_admin_us": 0, 00:05:56.179 "keep_alive_timeout_ms": 10000, 00:05:56.179 "arbitration_burst": 0, 00:05:56.179 "low_priority_weight": 0, 00:05:56.179 "medium_priority_weight": 0, 00:05:56.179 "high_priority_weight": 0, 00:05:56.179 
"nvme_adminq_poll_period_us": 10000, 00:05:56.179 "nvme_ioq_poll_period_us": 0, 00:05:56.179 "io_queue_requests": 0, 00:05:56.179 "delay_cmd_submit": true, 00:05:56.179 "transport_retry_count": 4, 00:05:56.179 "bdev_retry_count": 3, 00:05:56.179 "transport_ack_timeout": 0, 00:05:56.179 "ctrlr_loss_timeout_sec": 0, 00:05:56.179 "reconnect_delay_sec": 0, 00:05:56.179 "fast_io_fail_timeout_sec": 0, 00:05:56.179 "disable_auto_failback": false, 00:05:56.179 "generate_uuids": false, 00:05:56.180 "transport_tos": 0, 00:05:56.180 "nvme_error_stat": false, 00:05:56.180 "rdma_srq_size": 0, 00:05:56.180 "io_path_stat": false, 00:05:56.180 "allow_accel_sequence": false, 00:05:56.180 "rdma_max_cq_size": 0, 00:05:56.180 "rdma_cm_event_timeout_ms": 0, 00:05:56.180 "dhchap_digests": [ 00:05:56.180 "sha256", 00:05:56.180 "sha384", 00:05:56.180 "sha512" 00:05:56.180 ], 00:05:56.180 "dhchap_dhgroups": [ 00:05:56.180 "null", 00:05:56.180 "ffdhe2048", 00:05:56.180 "ffdhe3072", 00:05:56.180 "ffdhe4096", 00:05:56.180 "ffdhe6144", 00:05:56.180 "ffdhe8192" 00:05:56.180 ] 00:05:56.180 } 00:05:56.180 }, 00:05:56.180 { 00:05:56.180 "method": "bdev_nvme_set_hotplug", 00:05:56.180 "params": { 00:05:56.180 "period_us": 100000, 00:05:56.180 "enable": false 00:05:56.180 } 00:05:56.180 }, 00:05:56.180 { 00:05:56.180 "method": "bdev_wait_for_examine" 00:05:56.180 } 00:05:56.180 ] 00:05:56.180 }, 00:05:56.180 { 00:05:56.180 "subsystem": "scsi", 00:05:56.180 "config": null 00:05:56.180 }, 00:05:56.180 { 00:05:56.180 "subsystem": "scheduler", 00:05:56.180 "config": [ 00:05:56.180 { 00:05:56.180 "method": "framework_set_scheduler", 00:05:56.180 "params": { 00:05:56.180 "name": "static" 00:05:56.180 } 00:05:56.180 } 00:05:56.180 ] 00:05:56.180 }, 00:05:56.180 { 00:05:56.180 "subsystem": "vhost_scsi", 00:05:56.180 "config": [] 00:05:56.180 }, 00:05:56.180 { 00:05:56.180 "subsystem": "vhost_blk", 00:05:56.180 "config": [] 00:05:56.180 }, 00:05:56.180 { 00:05:56.180 "subsystem": "ublk", 00:05:56.180 "config": [] 00:05:56.180 }, 00:05:56.180 { 00:05:56.180 "subsystem": "nbd", 00:05:56.180 "config": [] 00:05:56.180 }, 00:05:56.180 { 00:05:56.180 "subsystem": "nvmf", 00:05:56.180 "config": [ 00:05:56.180 { 00:05:56.180 "method": "nvmf_set_config", 00:05:56.180 "params": { 00:05:56.180 "discovery_filter": "match_any", 00:05:56.180 "admin_cmd_passthru": { 00:05:56.180 "identify_ctrlr": false 00:05:56.180 } 00:05:56.180 } 00:05:56.180 }, 00:05:56.180 { 00:05:56.180 "method": "nvmf_set_max_subsystems", 00:05:56.180 "params": { 00:05:56.180 "max_subsystems": 1024 00:05:56.180 } 00:05:56.180 }, 00:05:56.180 { 00:05:56.180 "method": "nvmf_set_crdt", 00:05:56.180 "params": { 00:05:56.180 "crdt1": 0, 00:05:56.180 "crdt2": 0, 00:05:56.180 "crdt3": 0 00:05:56.180 } 00:05:56.180 }, 00:05:56.180 { 00:05:56.180 "method": "nvmf_create_transport", 00:05:56.180 "params": { 00:05:56.180 "trtype": "TCP", 00:05:56.180 "max_queue_depth": 128, 00:05:56.180 "max_io_qpairs_per_ctrlr": 127, 00:05:56.180 "in_capsule_data_size": 4096, 00:05:56.180 "max_io_size": 131072, 00:05:56.180 "io_unit_size": 131072, 00:05:56.180 "max_aq_depth": 128, 00:05:56.180 "num_shared_buffers": 511, 00:05:56.180 "buf_cache_size": 4294967295, 00:05:56.180 "dif_insert_or_strip": false, 00:05:56.180 "zcopy": false, 00:05:56.180 "c2h_success": true, 00:05:56.180 "sock_priority": 0, 00:05:56.180 "abort_timeout_sec": 1, 00:05:56.180 "ack_timeout": 0, 00:05:56.180 "data_wr_pool_size": 0 00:05:56.180 } 00:05:56.180 } 00:05:56.180 ] 00:05:56.180 }, 00:05:56.180 { 00:05:56.180 "subsystem": 
"iscsi", 00:05:56.180 "config": [ 00:05:56.180 { 00:05:56.180 "method": "iscsi_set_options", 00:05:56.180 "params": { 00:05:56.180 "node_base": "iqn.2016-06.io.spdk", 00:05:56.180 "max_sessions": 128, 00:05:56.180 "max_connections_per_session": 2, 00:05:56.180 "max_queue_depth": 64, 00:05:56.180 "default_time2wait": 2, 00:05:56.180 "default_time2retain": 20, 00:05:56.180 "first_burst_length": 8192, 00:05:56.180 "immediate_data": true, 00:05:56.180 "allow_duplicated_isid": false, 00:05:56.180 "error_recovery_level": 0, 00:05:56.180 "nop_timeout": 60, 00:05:56.180 "nop_in_interval": 30, 00:05:56.180 "disable_chap": false, 00:05:56.180 "require_chap": false, 00:05:56.180 "mutual_chap": false, 00:05:56.180 "chap_group": 0, 00:05:56.180 "max_large_datain_per_connection": 64, 00:05:56.180 "max_r2t_per_connection": 4, 00:05:56.180 "pdu_pool_size": 36864, 00:05:56.180 "immediate_data_pool_size": 16384, 00:05:56.180 "data_out_pool_size": 2048 00:05:56.180 } 00:05:56.180 } 00:05:56.180 ] 00:05:56.180 } 00:05:56.180 ] 00:05:56.180 } 00:05:56.180 05:52:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:56.180 05:52:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 71108 00:05:56.180 05:52:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 71108 ']' 00:05:56.180 05:52:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 71108 00:05:56.180 05:52:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:56.180 05:52:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:56.180 05:52:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71108 00:05:56.180 05:52:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:56.180 05:52:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:56.180 killing process with pid 71108 00:05:56.180 05:52:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71108' 00:05:56.180 05:52:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 71108 00:05:56.180 05:52:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 71108 00:05:56.439 05:52:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=71123 00:05:56.439 05:52:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:56.439 05:52:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:01.706 05:52:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 71123 00:06:01.706 05:52:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 71123 ']' 00:06:01.706 05:52:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 71123 00:06:01.706 05:52:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:06:01.706 05:52:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:01.706 05:52:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71123 00:06:01.706 05:52:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:01.706 05:52:53 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:01.706 killing process with pid 71123 00:06:01.706 05:52:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71123' 00:06:01.706 05:52:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 71123 00:06:01.706 05:52:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 71123 00:06:01.706 05:52:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:01.706 05:52:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:01.706 00:06:01.706 real 0m6.043s 00:06:01.706 user 0m5.805s 00:06:01.706 sys 0m0.384s 00:06:01.706 05:52:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.706 05:52:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:01.706 ************************************ 00:06:01.706 END TEST skip_rpc_with_json 00:06:01.706 ************************************ 00:06:01.706 05:52:53 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:01.706 05:52:53 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:01.706 05:52:53 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:01.706 05:52:53 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.706 05:52:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.706 ************************************ 00:06:01.706 START TEST skip_rpc_with_delay 00:06:01.706 ************************************ 00:06:01.706 05:52:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:06:01.706 05:52:53 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:01.706 05:52:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:06:01.707 05:52:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:01.707 05:52:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:01.707 05:52:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:01.707 05:52:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:01.707 05:52:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:01.707 05:52:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:01.707 05:52:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:01.707 05:52:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:01.707 05:52:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:01.707 05:52:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:01.965 
[2024-07-13 05:52:53.468006] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:06:01.965 [2024-07-13 05:52:53.468139] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:01.965 05:52:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:06:01.965 05:52:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:01.965 05:52:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:01.965 05:52:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:01.965 00:06:01.965 real 0m0.100s 00:06:01.965 user 0m0.064s 00:06:01.965 sys 0m0.028s 00:06:01.965 05:52:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.965 ************************************ 00:06:01.965 END TEST skip_rpc_with_delay 00:06:01.965 05:52:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:01.965 ************************************ 00:06:01.965 05:52:53 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:01.966 05:52:53 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:01.966 05:52:53 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:01.966 05:52:53 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:01.966 05:52:53 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:01.966 05:52:53 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.966 05:52:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.966 ************************************ 00:06:01.966 START TEST exit_on_failed_rpc_init 00:06:01.966 ************************************ 00:06:01.966 05:52:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:06:01.966 05:52:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=71238 00:06:01.966 05:52:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:01.966 05:52:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 71238 00:06:01.966 05:52:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 71238 ']' 00:06:01.966 05:52:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.966 05:52:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:01.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.966 05:52:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.966 05:52:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:01.966 05:52:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:01.966 [2024-07-13 05:52:53.619596] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:06:01.966 [2024-07-13 05:52:53.619699] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71238 ] 00:06:02.225 [2024-07-13 05:52:53.760877] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.225 [2024-07-13 05:52:53.801050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.225 [2024-07-13 05:52:53.834142] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:02.483 05:52:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:02.483 05:52:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:06:02.483 05:52:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:02.483 05:52:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:02.483 05:52:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:06:02.483 05:52:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:02.483 05:52:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:02.483 05:52:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:02.483 05:52:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:02.483 05:52:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:02.483 05:52:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:02.483 05:52:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:02.483 05:52:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:02.483 05:52:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:02.483 05:52:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:02.483 [2024-07-13 05:52:54.035913] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:02.483 [2024-07-13 05:52:54.036026] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71243 ] 00:06:02.483 [2024-07-13 05:52:54.176774] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.742 [2024-07-13 05:52:54.217876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.742 [2024-07-13 05:52:54.217993] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:02.742 [2024-07-13 05:52:54.218012] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:02.742 [2024-07-13 05:52:54.218022] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:02.742 05:52:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:06:02.742 05:52:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:02.742 05:52:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:06:02.742 05:52:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:06:02.742 05:52:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:06:02.742 05:52:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:02.742 05:52:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:02.742 05:52:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 71238 00:06:02.742 05:52:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 71238 ']' 00:06:02.742 05:52:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 71238 00:06:02.742 05:52:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:06:02.742 05:52:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:02.742 05:52:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71238 00:06:02.742 05:52:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:02.742 05:52:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:02.742 killing process with pid 71238 00:06:02.742 05:52:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71238' 00:06:02.742 05:52:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 71238 00:06:02.742 05:52:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 71238 00:06:03.001 00:06:03.001 real 0m0.991s 00:06:03.001 user 0m1.135s 00:06:03.001 sys 0m0.278s 00:06:03.001 05:52:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.001 05:52:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:03.001 ************************************ 00:06:03.001 END TEST exit_on_failed_rpc_init 00:06:03.001 ************************************ 00:06:03.001 05:52:54 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:03.001 05:52:54 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:03.001 00:06:03.001 real 0m12.695s 00:06:03.001 user 0m12.111s 00:06:03.001 sys 0m1.031s 00:06:03.001 05:52:54 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.001 05:52:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.001 ************************************ 00:06:03.001 END TEST skip_rpc 00:06:03.001 ************************************ 00:06:03.001 05:52:54 -- common/autotest_common.sh@1142 -- # return 0 00:06:03.001 05:52:54 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:03.001 05:52:54 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:03.001 
05:52:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.001 05:52:54 -- common/autotest_common.sh@10 -- # set +x 00:06:03.001 ************************************ 00:06:03.001 START TEST rpc_client 00:06:03.001 ************************************ 00:06:03.001 05:52:54 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:03.001 * Looking for test storage... 00:06:03.001 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:03.001 05:52:54 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:03.261 OK 00:06:03.261 05:52:54 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:03.261 00:06:03.261 real 0m0.100s 00:06:03.261 user 0m0.047s 00:06:03.261 sys 0m0.058s 00:06:03.261 05:52:54 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.261 ************************************ 00:06:03.261 END TEST rpc_client 00:06:03.261 05:52:54 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:03.261 ************************************ 00:06:03.261 05:52:54 -- common/autotest_common.sh@1142 -- # return 0 00:06:03.261 05:52:54 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:03.261 05:52:54 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:03.261 05:52:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.261 05:52:54 -- common/autotest_common.sh@10 -- # set +x 00:06:03.261 ************************************ 00:06:03.261 START TEST json_config 00:06:03.261 ************************************ 00:06:03.261 05:52:54 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:03.261 05:52:54 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:03.261 05:52:54 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:03.261 05:52:54 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:03.261 05:52:54 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:03.261 05:52:54 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:03.261 05:52:54 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:03.261 05:52:54 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:03.261 05:52:54 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:03.261 05:52:54 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:03.261 05:52:54 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:03.261 05:52:54 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:03.261 05:52:54 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:03.261 05:52:54 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:06:03.261 05:52:54 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=d95af516-4532-4483-a837-b3cd72acabce 00:06:03.261 05:52:54 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:03.261 05:52:54 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:03.261 05:52:54 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:03.261 05:52:54 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:03.261 05:52:54 json_config -- 
nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:03.261 05:52:54 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:03.261 05:52:54 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:03.261 05:52:54 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:03.261 05:52:54 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.261 05:52:54 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.261 05:52:54 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.261 05:52:54 json_config -- paths/export.sh@5 -- # export PATH 00:06:03.261 05:52:54 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.261 05:52:54 json_config -- nvmf/common.sh@47 -- # : 0 00:06:03.261 05:52:54 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:03.261 05:52:54 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:03.261 05:52:54 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:03.261 05:52:54 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:03.261 05:52:54 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:03.261 05:52:54 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:03.261 05:52:54 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:03.261 05:52:54 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:03.261 05:52:54 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:03.261 05:52:54 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:03.261 05:52:54 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:03.261 05:52:54 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:03.261 05:52:54 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:03.261 05:52:54 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:03.261 05:52:54 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:03.261 05:52:54 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:03.261 05:52:54 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:03.261 05:52:54 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:03.261 05:52:54 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:03.261 05:52:54 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:06:03.261 05:52:54 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:03.261 05:52:54 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:03.261 05:52:54 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:03.261 INFO: JSON configuration test init 00:06:03.261 05:52:54 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:06:03.261 05:52:54 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:06:03.261 05:52:54 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:06:03.261 05:52:54 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:03.261 05:52:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:03.261 05:52:54 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:06:03.261 05:52:54 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:03.261 05:52:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:03.261 05:52:54 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:06:03.261 05:52:54 json_config -- json_config/common.sh@9 -- # local app=target 00:06:03.261 05:52:54 json_config -- json_config/common.sh@10 -- # shift 00:06:03.261 05:52:54 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:03.261 05:52:54 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:03.261 05:52:54 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:03.261 05:52:54 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:03.261 05:52:54 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:03.261 05:52:54 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=71361 00:06:03.261 05:52:54 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:03.261 Waiting for target to run... 
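[editor's note] The target for this test has just been started with --wait-for-rpc on a private RPC socket (its command line is echoed below), and the harness then blocks until that socket answers before loading configuration. A minimal sketch of that launch-and-wait pattern, assuming the same binary and socket path as in the trace; the polling loop is a stand-in for the harness's waitforlisten helper, not its actual implementation. Because of --wait-for-rpc, only startup-stage RPCs are accepted until the configuration is loaded, and rpc_get_methods is one of them, so it works as a readiness probe.

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
    -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
tgt_pid=$!
# Poll until the RPC socket is up and answering (stand-in for waitforlisten).
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock \
      rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done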
00:06:03.261 05:52:54 json_config -- json_config/common.sh@25 -- # waitforlisten 71361 /var/tmp/spdk_tgt.sock 00:06:03.261 05:52:54 json_config -- common/autotest_common.sh@829 -- # '[' -z 71361 ']' 00:06:03.261 05:52:54 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:03.261 05:52:54 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:03.261 05:52:54 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:03.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:03.261 05:52:54 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:03.261 05:52:54 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:03.261 05:52:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:03.261 [2024-07-13 05:52:54.934879] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:03.261 [2024-07-13 05:52:54.934993] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71361 ] 00:06:03.520 [2024-07-13 05:52:55.237814] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.778 [2024-07-13 05:52:55.259189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.345 05:52:55 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:04.345 05:52:55 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:04.345 05:52:55 json_config -- json_config/common.sh@26 -- # echo '' 00:06:04.345 00:06:04.345 05:52:55 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:06:04.345 05:52:55 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:06:04.345 05:52:55 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:04.345 05:52:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.345 05:52:55 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:06:04.345 05:52:55 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:06:04.345 05:52:55 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:04.345 05:52:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.345 05:52:55 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:04.345 05:52:55 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:06:04.345 05:52:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:04.604 [2024-07-13 05:52:56.211305] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:04.862 05:52:56 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:06:04.862 05:52:56 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:04.862 05:52:56 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:04.862 05:52:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.862 05:52:56 
json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:04.862 05:52:56 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:04.862 05:52:56 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:04.862 05:52:56 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:06:04.862 05:52:56 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:04.862 05:52:56 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:06:05.121 05:52:56 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:05.121 05:52:56 json_config -- json_config/json_config.sh@48 -- # local get_types 00:06:05.121 05:52:56 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:06:05.121 05:52:56 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:06:05.121 05:52:56 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:05.121 05:52:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:05.121 05:52:56 json_config -- json_config/json_config.sh@55 -- # return 0 00:06:05.121 05:52:56 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:06:05.121 05:52:56 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:06:05.121 05:52:56 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:05.121 05:52:56 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:06:05.121 05:52:56 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:06:05.121 05:52:56 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:06:05.121 05:52:56 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:05.121 05:52:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:05.121 05:52:56 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:05.121 05:52:56 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:06:05.121 05:52:56 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:06:05.121 05:52:56 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:05.121 05:52:56 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:05.380 MallocForNvmf0 00:06:05.380 05:52:56 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:05.380 05:52:56 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:05.639 MallocForNvmf1 00:06:05.639 05:52:57 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:05.639 05:52:57 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:05.898 [2024-07-13 05:52:57.416803] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:05.898 05:52:57 json_config -- json_config/json_config.sh@246 -- # tgt_rpc 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:05.898 05:52:57 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:06.156 05:52:57 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:06.156 05:52:57 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:06.156 05:52:57 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:06.156 05:52:57 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:06.723 05:52:58 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:06.723 05:52:58 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:06.723 [2024-07-13 05:52:58.349357] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:06.723 05:52:58 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:06:06.723 05:52:58 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:06.723 05:52:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:06.723 05:52:58 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:06:06.724 05:52:58 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:06.724 05:52:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:06.724 05:52:58 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:06:06.724 05:52:58 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:06.724 05:52:58 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:07.291 MallocBdevForConfigChangeCheck 00:06:07.291 05:52:58 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:06:07.291 05:52:58 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:07.291 05:52:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:07.291 05:52:58 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:06:07.291 05:52:58 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:07.549 INFO: shutting down applications... 00:06:07.549 05:52:59 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 
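[editor's note] Before the shutdown announced above, it is worth collecting the RPC sequence that produced this target configuration; a hedged recap using the same rpc.py calls that appear in the trace. The redirect of save_config into spdk_tgt_config.json is an assumption — the log only shows that the relaunch below reads that file.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk_tgt.sock
$rpc -s $sock bdev_malloc_create 8 512 --name MallocForNvmf0    # 8 MB malloc bdev, 512 B blocks
$rpc -s $sock bdev_malloc_create 4 1024 --name MallocForNvmf1   # 4 MB malloc bdev, 1024 B blocks
$rpc -s $sock nvmf_create_transport -t tcp -u 8192 -c 0
$rpc -s $sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$rpc -s $sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
$rpc -s $sock save_config > /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json   # assumed destination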
00:06:07.549 05:52:59 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:06:07.549 05:52:59 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:06:07.549 05:52:59 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:06:07.549 05:52:59 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:07.809 Calling clear_iscsi_subsystem 00:06:07.809 Calling clear_nvmf_subsystem 00:06:07.809 Calling clear_nbd_subsystem 00:06:07.809 Calling clear_ublk_subsystem 00:06:07.809 Calling clear_vhost_blk_subsystem 00:06:07.809 Calling clear_vhost_scsi_subsystem 00:06:07.809 Calling clear_bdev_subsystem 00:06:07.809 05:52:59 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:06:07.809 05:52:59 json_config -- json_config/json_config.sh@343 -- # count=100 00:06:07.809 05:52:59 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:06:07.809 05:52:59 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:07.809 05:52:59 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:07.809 05:52:59 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:06:08.389 05:52:59 json_config -- json_config/json_config.sh@345 -- # break 00:06:08.389 05:52:59 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:06:08.389 05:52:59 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:06:08.389 05:52:59 json_config -- json_config/common.sh@31 -- # local app=target 00:06:08.389 05:52:59 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:08.389 05:52:59 json_config -- json_config/common.sh@35 -- # [[ -n 71361 ]] 00:06:08.389 05:52:59 json_config -- json_config/common.sh@38 -- # kill -SIGINT 71361 00:06:08.389 05:52:59 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:08.389 05:52:59 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:08.389 05:52:59 json_config -- json_config/common.sh@41 -- # kill -0 71361 00:06:08.389 05:52:59 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:08.665 05:53:00 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:08.665 05:53:00 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:08.665 05:53:00 json_config -- json_config/common.sh@41 -- # kill -0 71361 00:06:08.665 05:53:00 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:08.665 05:53:00 json_config -- json_config/common.sh@43 -- # break 00:06:08.665 05:53:00 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:08.665 SPDK target shutdown done 00:06:08.665 05:53:00 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:08.665 INFO: relaunching applications... 00:06:08.665 05:53:00 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 
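[editor's note] Before the relaunch announced above, note that the shutdown just logged follows a simple pattern: clear the runtime configuration with clear_config.py, send SIGINT, then poll the PID until it disappears. A sketch of that loop, assuming $tgt_pid holds the target's PID; the harness's json_config_test_shutdown_app/killprocess helpers add more bookkeeping than shown here.

kill -SIGINT "$tgt_pid"
for (( i = 0; i < 30; i++ )); do
    kill -0 "$tgt_pid" 2>/dev/null || break   # process gone: shutdown finished
    sleep 0.5
done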
00:06:08.665 05:53:00 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:08.665 05:53:00 json_config -- json_config/common.sh@9 -- # local app=target 00:06:08.665 05:53:00 json_config -- json_config/common.sh@10 -- # shift 00:06:08.665 05:53:00 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:08.665 05:53:00 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:08.665 05:53:00 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:08.665 05:53:00 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:08.665 05:53:00 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:08.923 05:53:00 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=71546 00:06:08.923 Waiting for target to run... 00:06:08.923 05:53:00 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:08.923 05:53:00 json_config -- json_config/common.sh@25 -- # waitforlisten 71546 /var/tmp/spdk_tgt.sock 00:06:08.923 05:53:00 json_config -- common/autotest_common.sh@829 -- # '[' -z 71546 ']' 00:06:08.923 05:53:00 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:08.923 05:53:00 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:08.923 05:53:00 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:08.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:08.923 05:53:00 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:08.923 05:53:00 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:08.923 05:53:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:08.923 [2024-07-13 05:53:00.481470] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:08.923 [2024-07-13 05:53:00.481597] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71546 ] 00:06:09.182 [2024-07-13 05:53:00.780177] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.182 [2024-07-13 05:53:00.807583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.440 [2024-07-13 05:53:00.935200] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:09.440 [2024-07-13 05:53:01.119798] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:09.440 [2024-07-13 05:53:01.151829] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:10.007 05:53:01 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:10.007 00:06:10.007 05:53:01 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:10.007 05:53:01 json_config -- json_config/common.sh@26 -- # echo '' 00:06:10.007 05:53:01 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:06:10.007 INFO: Checking if target configuration is the same... 
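[editor's note] The "configuration is the same" check that follows compares a fresh save_config from the relaunched target against the spdk_tgt_config.json it was booted from; both sides are normalized with config_filter.py -method sort before diffing, so ordering differences do not count as changes. A hedged sketch of that comparison — json_diff.sh's temp-file handling, visible in the trace below, is simplified here, and the stdin/stdout usage of config_filter.py is inferred from that trace.

filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
live=$(mktemp); ref=$(mktemp)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
    | $filter -method sort > "$live"
$filter -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > "$ref"
diff -u "$ref" "$live"   # exit status 0: configurations match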
00:06:10.007 05:53:01 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:10.007 05:53:01 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:06:10.007 05:53:01 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:10.007 05:53:01 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:10.007 + '[' 2 -ne 2 ']' 00:06:10.007 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:10.007 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:06:10.007 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:10.007 +++ basename /dev/fd/62 00:06:10.007 ++ mktemp /tmp/62.XXX 00:06:10.007 + tmp_file_1=/tmp/62.pBZ 00:06:10.007 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:10.007 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:10.007 + tmp_file_2=/tmp/spdk_tgt_config.json.BpI 00:06:10.007 + ret=0 00:06:10.007 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:10.266 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:10.266 + diff -u /tmp/62.pBZ /tmp/spdk_tgt_config.json.BpI 00:06:10.266 INFO: JSON config files are the same 00:06:10.266 + echo 'INFO: JSON config files are the same' 00:06:10.266 + rm /tmp/62.pBZ /tmp/spdk_tgt_config.json.BpI 00:06:10.266 + exit 0 00:06:10.266 05:53:01 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:06:10.266 INFO: changing configuration and checking if this can be detected... 00:06:10.266 05:53:01 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:10.266 05:53:01 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:10.266 05:53:01 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:10.524 05:53:02 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:10.524 05:53:02 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:06:10.524 05:53:02 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:10.524 + '[' 2 -ne 2 ']' 00:06:10.524 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:10.524 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:06:10.524 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:10.524 +++ basename /dev/fd/62 00:06:10.524 ++ mktemp /tmp/62.XXX 00:06:10.524 + tmp_file_1=/tmp/62.YEy 00:06:10.524 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:10.524 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:10.524 + tmp_file_2=/tmp/spdk_tgt_config.json.d07 00:06:10.524 + ret=0 00:06:10.524 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:10.783 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:10.783 + diff -u /tmp/62.YEy /tmp/spdk_tgt_config.json.d07 00:06:10.783 + ret=1 00:06:10.783 + echo '=== Start of file: /tmp/62.YEy ===' 00:06:10.783 + cat /tmp/62.YEy 00:06:10.783 + echo '=== End of file: /tmp/62.YEy ===' 00:06:10.783 + echo '' 00:06:10.783 + echo '=== Start of file: /tmp/spdk_tgt_config.json.d07 ===' 00:06:10.783 + cat /tmp/spdk_tgt_config.json.d07 00:06:10.783 + echo '=== End of file: /tmp/spdk_tgt_config.json.d07 ===' 00:06:10.783 + echo '' 00:06:10.783 + rm /tmp/62.YEy /tmp/spdk_tgt_config.json.d07 00:06:10.783 + exit 1 00:06:10.783 INFO: configuration change detected. 00:06:10.783 05:53:02 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:06:10.783 05:53:02 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:06:10.783 05:53:02 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:06:10.783 05:53:02 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:10.783 05:53:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.784 05:53:02 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:06:10.784 05:53:02 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:06:10.784 05:53:02 json_config -- json_config/json_config.sh@317 -- # [[ -n 71546 ]] 00:06:10.784 05:53:02 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:06:10.784 05:53:02 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:06:10.784 05:53:02 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:10.784 05:53:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:11.043 05:53:02 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:06:11.043 05:53:02 json_config -- json_config/json_config.sh@193 -- # uname -s 00:06:11.043 05:53:02 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:06:11.043 05:53:02 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:06:11.043 05:53:02 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:06:11.043 05:53:02 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:06:11.043 05:53:02 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:11.043 05:53:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:11.043 05:53:02 json_config -- json_config/json_config.sh@323 -- # killprocess 71546 00:06:11.043 05:53:02 json_config -- common/autotest_common.sh@948 -- # '[' -z 71546 ']' 00:06:11.043 05:53:02 json_config -- common/autotest_common.sh@952 -- # kill -0 71546 00:06:11.043 05:53:02 json_config -- common/autotest_common.sh@953 -- # uname 00:06:11.043 05:53:02 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:11.043 05:53:02 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71546 00:06:11.043 
05:53:02 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:11.043 killing process with pid 71546 00:06:11.043 05:53:02 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:11.043 05:53:02 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71546' 00:06:11.043 05:53:02 json_config -- common/autotest_common.sh@967 -- # kill 71546 00:06:11.043 05:53:02 json_config -- common/autotest_common.sh@972 -- # wait 71546 00:06:11.043 05:53:02 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:11.043 05:53:02 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:06:11.043 05:53:02 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:11.043 05:53:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:11.308 INFO: Success 00:06:11.308 05:53:02 json_config -- json_config/json_config.sh@328 -- # return 0 00:06:11.308 05:53:02 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:06:11.308 00:06:11.308 real 0m8.006s 00:06:11.308 user 0m11.577s 00:06:11.308 sys 0m1.449s 00:06:11.308 05:53:02 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.308 ************************************ 00:06:11.308 05:53:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:11.308 END TEST json_config 00:06:11.308 ************************************ 00:06:11.308 05:53:02 -- common/autotest_common.sh@1142 -- # return 0 00:06:11.308 05:53:02 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:11.308 05:53:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:11.308 05:53:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.308 05:53:02 -- common/autotest_common.sh@10 -- # set +x 00:06:11.308 ************************************ 00:06:11.308 START TEST json_config_extra_key 00:06:11.308 ************************************ 00:06:11.308 05:53:02 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:11.308 05:53:02 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:11.308 05:53:02 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:11.308 05:53:02 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:11.308 05:53:02 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:11.308 05:53:02 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:11.308 05:53:02 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:11.308 05:53:02 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:11.308 05:53:02 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:11.308 05:53:02 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:11.308 05:53:02 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:11.308 05:53:02 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:11.308 05:53:02 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:11.308 05:53:02 json_config_extra_key -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:06:11.308 05:53:02 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=d95af516-4532-4483-a837-b3cd72acabce 00:06:11.308 05:53:02 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:11.308 05:53:02 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:11.308 05:53:02 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:11.308 05:53:02 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:11.308 05:53:02 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:11.308 05:53:02 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:11.308 05:53:02 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:11.308 05:53:02 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:11.308 05:53:02 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.308 05:53:02 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.308 05:53:02 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.308 05:53:02 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:11.308 05:53:02 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.308 05:53:02 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:11.308 05:53:02 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:11.308 05:53:02 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:11.308 05:53:02 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:11.308 05:53:02 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:11.308 05:53:02 json_config_extra_key -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:11.308 05:53:02 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:11.308 05:53:02 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:11.308 05:53:02 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:11.308 05:53:02 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:11.308 05:53:02 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:11.308 05:53:02 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:11.308 05:53:02 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:11.308 05:53:02 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:11.308 05:53:02 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:11.308 05:53:02 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:11.308 05:53:02 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:11.308 05:53:02 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:11.308 05:53:02 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:11.308 INFO: launching applications... 00:06:11.308 05:53:02 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:11.308 05:53:02 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:11.308 05:53:02 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:11.308 05:53:02 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:11.308 05:53:02 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:11.308 05:53:02 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:11.308 05:53:02 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:11.308 05:53:02 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:11.308 05:53:02 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:11.308 05:53:02 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=71692 00:06:11.308 05:53:02 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:11.308 Waiting for target to run... 
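The target being waited on here is a dedicated spdk_tgt instance on its own RPC socket, configured entirely from a pre-built JSON file at startup rather than through individual RPC calls. A rough sketch of the equivalent manual invocation, using the paths as laid out in this run (the trailing rpc_get_methods call is only an illustrative way of talking to the private socket afterwards, not part of the test itself):

  # Start spdk_tgt on core 0 with 1024 MiB of memory, a private RPC socket,
  # and the JSON configuration applied during startup (mirrors the command traced below).
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
      -r /var/tmp/spdk_tgt.sock \
      --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json

  # Subsequent RPCs must point at the same socket explicitly, for example:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods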
00:06:11.308 05:53:02 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 71692 /var/tmp/spdk_tgt.sock 00:06:11.308 05:53:02 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 71692 ']' 00:06:11.308 05:53:02 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:11.308 05:53:02 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:11.308 05:53:02 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:11.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:11.308 05:53:02 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:11.308 05:53:02 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:11.308 05:53:02 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:11.308 [2024-07-13 05:53:02.973850] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:11.308 [2024-07-13 05:53:02.973947] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71692 ] 00:06:11.566 [2024-07-13 05:53:03.262363] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.566 [2024-07-13 05:53:03.282213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.824 [2024-07-13 05:53:03.302652] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:12.390 05:53:03 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:12.391 00:06:12.391 05:53:03 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:06:12.391 05:53:03 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:12.391 INFO: shutting down applications... 00:06:12.391 05:53:03 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
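The "shutting down applications" step that follows is a SIGINT-plus-poll loop: the test signals the target, then checks for up to thirty half-second intervals that the process has actually exited before declaring shutdown done. A minimal shell equivalent, assuming $PID holds the spdk_tgt process id:

  kill -SIGINT "$PID"                       # ask the SPDK app to shut down cleanly
  for _ in $(seq 1 30); do
      kill -0 "$PID" 2>/dev/null || break   # kill -0 only probes; the loop ends once the PID is gone
      sleep 0.5
  done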
00:06:12.391 05:53:03 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:12.391 05:53:03 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:12.391 05:53:03 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:12.391 05:53:03 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 71692 ]] 00:06:12.391 05:53:03 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 71692 00:06:12.391 05:53:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:12.391 05:53:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:12.391 05:53:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 71692 00:06:12.391 05:53:03 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:12.956 05:53:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:12.957 05:53:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:12.957 05:53:04 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 71692 00:06:12.957 05:53:04 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:12.957 05:53:04 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:12.957 05:53:04 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:12.957 SPDK target shutdown done 00:06:12.957 05:53:04 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:12.957 Success 00:06:12.957 05:53:04 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:12.957 00:06:12.957 real 0m1.623s 00:06:12.957 user 0m1.489s 00:06:12.957 sys 0m0.299s 00:06:12.957 05:53:04 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.957 ************************************ 00:06:12.957 END TEST json_config_extra_key 00:06:12.957 05:53:04 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:12.957 ************************************ 00:06:12.957 05:53:04 -- common/autotest_common.sh@1142 -- # return 0 00:06:12.957 05:53:04 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:12.957 05:53:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:12.957 05:53:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.957 05:53:04 -- common/autotest_common.sh@10 -- # set +x 00:06:12.957 ************************************ 00:06:12.957 START TEST alias_rpc 00:06:12.957 ************************************ 00:06:12.957 05:53:04 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:12.957 * Looking for test storage... 
00:06:12.957 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:12.957 05:53:04 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:12.957 05:53:04 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=71751 00:06:12.957 05:53:04 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:12.957 05:53:04 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 71751 00:06:12.957 05:53:04 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 71751 ']' 00:06:12.957 05:53:04 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.957 05:53:04 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:12.957 05:53:04 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.957 05:53:04 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:12.957 05:53:04 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.957 [2024-07-13 05:53:04.653580] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:12.957 [2024-07-13 05:53:04.653671] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71751 ] 00:06:13.215 [2024-07-13 05:53:04.789689] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.215 [2024-07-13 05:53:04.828183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.215 [2024-07-13 05:53:04.859295] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:14.149 05:53:05 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:14.149 05:53:05 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:14.149 05:53:05 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:14.149 05:53:05 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 71751 00:06:14.149 05:53:05 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 71751 ']' 00:06:14.149 05:53:05 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 71751 00:06:14.149 05:53:05 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:06:14.149 05:53:05 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:14.149 05:53:05 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71751 00:06:14.149 05:53:05 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:14.149 05:53:05 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:14.149 05:53:05 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71751' 00:06:14.149 killing process with pid 71751 00:06:14.149 05:53:05 alias_rpc -- common/autotest_common.sh@967 -- # kill 71751 00:06:14.149 05:53:05 alias_rpc -- common/autotest_common.sh@972 -- # wait 71751 00:06:14.407 00:06:14.407 real 0m1.568s 00:06:14.407 user 0m1.841s 00:06:14.407 sys 0m0.328s 00:06:14.407 05:53:06 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.407 ************************************ 00:06:14.407 END TEST alias_rpc 00:06:14.407 
************************************ 00:06:14.407 05:53:06 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.407 05:53:06 -- common/autotest_common.sh@1142 -- # return 0 00:06:14.407 05:53:06 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:14.407 05:53:06 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:14.407 05:53:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:14.407 05:53:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.407 05:53:06 -- common/autotest_common.sh@10 -- # set +x 00:06:14.664 ************************************ 00:06:14.664 START TEST spdkcli_tcp 00:06:14.664 ************************************ 00:06:14.664 05:53:06 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:14.664 * Looking for test storage... 00:06:14.664 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:14.664 05:53:06 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:14.664 05:53:06 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:14.664 05:53:06 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:14.664 05:53:06 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:14.664 05:53:06 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:14.664 05:53:06 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:14.664 05:53:06 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:14.664 05:53:06 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:14.664 05:53:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:14.664 05:53:06 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=71827 00:06:14.664 05:53:06 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:14.664 05:53:06 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 71827 00:06:14.664 05:53:06 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 71827 ']' 00:06:14.664 05:53:06 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.664 05:53:06 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:14.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.664 05:53:06 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.664 05:53:06 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:14.664 05:53:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:14.664 [2024-07-13 05:53:06.279659] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:06:14.664 [2024-07-13 05:53:06.279757] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71827 ] 00:06:14.922 [2024-07-13 05:53:06.415152] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:14.923 [2024-07-13 05:53:06.450318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.923 [2024-07-13 05:53:06.450325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.923 [2024-07-13 05:53:06.478286] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:14.923 05:53:06 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:14.923 05:53:06 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:06:14.923 05:53:06 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=71831 00:06:14.923 05:53:06 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:14.923 05:53:06 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:15.181 [ 00:06:15.181 "bdev_malloc_delete", 00:06:15.181 "bdev_malloc_create", 00:06:15.181 "bdev_null_resize", 00:06:15.181 "bdev_null_delete", 00:06:15.181 "bdev_null_create", 00:06:15.181 "bdev_nvme_cuse_unregister", 00:06:15.181 "bdev_nvme_cuse_register", 00:06:15.181 "bdev_opal_new_user", 00:06:15.181 "bdev_opal_set_lock_state", 00:06:15.181 "bdev_opal_delete", 00:06:15.181 "bdev_opal_get_info", 00:06:15.181 "bdev_opal_create", 00:06:15.181 "bdev_nvme_opal_revert", 00:06:15.181 "bdev_nvme_opal_init", 00:06:15.181 "bdev_nvme_send_cmd", 00:06:15.181 "bdev_nvme_get_path_iostat", 00:06:15.181 "bdev_nvme_get_mdns_discovery_info", 00:06:15.181 "bdev_nvme_stop_mdns_discovery", 00:06:15.181 "bdev_nvme_start_mdns_discovery", 00:06:15.181 "bdev_nvme_set_multipath_policy", 00:06:15.181 "bdev_nvme_set_preferred_path", 00:06:15.181 "bdev_nvme_get_io_paths", 00:06:15.181 "bdev_nvme_remove_error_injection", 00:06:15.181 "bdev_nvme_add_error_injection", 00:06:15.181 "bdev_nvme_get_discovery_info", 00:06:15.181 "bdev_nvme_stop_discovery", 00:06:15.181 "bdev_nvme_start_discovery", 00:06:15.181 "bdev_nvme_get_controller_health_info", 00:06:15.181 "bdev_nvme_disable_controller", 00:06:15.181 "bdev_nvme_enable_controller", 00:06:15.181 "bdev_nvme_reset_controller", 00:06:15.181 "bdev_nvme_get_transport_statistics", 00:06:15.181 "bdev_nvme_apply_firmware", 00:06:15.181 "bdev_nvme_detach_controller", 00:06:15.181 "bdev_nvme_get_controllers", 00:06:15.181 "bdev_nvme_attach_controller", 00:06:15.181 "bdev_nvme_set_hotplug", 00:06:15.181 "bdev_nvme_set_options", 00:06:15.181 "bdev_passthru_delete", 00:06:15.181 "bdev_passthru_create", 00:06:15.181 "bdev_lvol_set_parent_bdev", 00:06:15.181 "bdev_lvol_set_parent", 00:06:15.181 "bdev_lvol_check_shallow_copy", 00:06:15.181 "bdev_lvol_start_shallow_copy", 00:06:15.181 "bdev_lvol_grow_lvstore", 00:06:15.181 "bdev_lvol_get_lvols", 00:06:15.181 "bdev_lvol_get_lvstores", 00:06:15.181 "bdev_lvol_delete", 00:06:15.181 "bdev_lvol_set_read_only", 00:06:15.181 "bdev_lvol_resize", 00:06:15.181 "bdev_lvol_decouple_parent", 00:06:15.181 "bdev_lvol_inflate", 00:06:15.181 "bdev_lvol_rename", 00:06:15.181 "bdev_lvol_clone_bdev", 00:06:15.181 "bdev_lvol_clone", 00:06:15.181 "bdev_lvol_snapshot", 00:06:15.181 "bdev_lvol_create", 
00:06:15.181 "bdev_lvol_delete_lvstore", 00:06:15.181 "bdev_lvol_rename_lvstore", 00:06:15.181 "bdev_lvol_create_lvstore", 00:06:15.181 "bdev_raid_set_options", 00:06:15.181 "bdev_raid_remove_base_bdev", 00:06:15.181 "bdev_raid_add_base_bdev", 00:06:15.181 "bdev_raid_delete", 00:06:15.181 "bdev_raid_create", 00:06:15.181 "bdev_raid_get_bdevs", 00:06:15.181 "bdev_error_inject_error", 00:06:15.181 "bdev_error_delete", 00:06:15.181 "bdev_error_create", 00:06:15.181 "bdev_split_delete", 00:06:15.181 "bdev_split_create", 00:06:15.181 "bdev_delay_delete", 00:06:15.181 "bdev_delay_create", 00:06:15.181 "bdev_delay_update_latency", 00:06:15.181 "bdev_zone_block_delete", 00:06:15.181 "bdev_zone_block_create", 00:06:15.181 "blobfs_create", 00:06:15.181 "blobfs_detect", 00:06:15.181 "blobfs_set_cache_size", 00:06:15.181 "bdev_aio_delete", 00:06:15.181 "bdev_aio_rescan", 00:06:15.181 "bdev_aio_create", 00:06:15.181 "bdev_ftl_set_property", 00:06:15.181 "bdev_ftl_get_properties", 00:06:15.181 "bdev_ftl_get_stats", 00:06:15.181 "bdev_ftl_unmap", 00:06:15.181 "bdev_ftl_unload", 00:06:15.181 "bdev_ftl_delete", 00:06:15.181 "bdev_ftl_load", 00:06:15.181 "bdev_ftl_create", 00:06:15.181 "bdev_virtio_attach_controller", 00:06:15.181 "bdev_virtio_scsi_get_devices", 00:06:15.181 "bdev_virtio_detach_controller", 00:06:15.181 "bdev_virtio_blk_set_hotplug", 00:06:15.181 "bdev_iscsi_delete", 00:06:15.181 "bdev_iscsi_create", 00:06:15.181 "bdev_iscsi_set_options", 00:06:15.181 "bdev_uring_delete", 00:06:15.181 "bdev_uring_rescan", 00:06:15.181 "bdev_uring_create", 00:06:15.181 "accel_error_inject_error", 00:06:15.181 "ioat_scan_accel_module", 00:06:15.181 "dsa_scan_accel_module", 00:06:15.181 "iaa_scan_accel_module", 00:06:15.181 "keyring_file_remove_key", 00:06:15.181 "keyring_file_add_key", 00:06:15.181 "keyring_linux_set_options", 00:06:15.181 "iscsi_get_histogram", 00:06:15.181 "iscsi_enable_histogram", 00:06:15.181 "iscsi_set_options", 00:06:15.181 "iscsi_get_auth_groups", 00:06:15.181 "iscsi_auth_group_remove_secret", 00:06:15.181 "iscsi_auth_group_add_secret", 00:06:15.181 "iscsi_delete_auth_group", 00:06:15.181 "iscsi_create_auth_group", 00:06:15.181 "iscsi_set_discovery_auth", 00:06:15.181 "iscsi_get_options", 00:06:15.181 "iscsi_target_node_request_logout", 00:06:15.181 "iscsi_target_node_set_redirect", 00:06:15.181 "iscsi_target_node_set_auth", 00:06:15.181 "iscsi_target_node_add_lun", 00:06:15.181 "iscsi_get_stats", 00:06:15.181 "iscsi_get_connections", 00:06:15.181 "iscsi_portal_group_set_auth", 00:06:15.181 "iscsi_start_portal_group", 00:06:15.181 "iscsi_delete_portal_group", 00:06:15.181 "iscsi_create_portal_group", 00:06:15.181 "iscsi_get_portal_groups", 00:06:15.181 "iscsi_delete_target_node", 00:06:15.181 "iscsi_target_node_remove_pg_ig_maps", 00:06:15.181 "iscsi_target_node_add_pg_ig_maps", 00:06:15.181 "iscsi_create_target_node", 00:06:15.181 "iscsi_get_target_nodes", 00:06:15.181 "iscsi_delete_initiator_group", 00:06:15.181 "iscsi_initiator_group_remove_initiators", 00:06:15.181 "iscsi_initiator_group_add_initiators", 00:06:15.181 "iscsi_create_initiator_group", 00:06:15.181 "iscsi_get_initiator_groups", 00:06:15.181 "nvmf_set_crdt", 00:06:15.181 "nvmf_set_config", 00:06:15.181 "nvmf_set_max_subsystems", 00:06:15.181 "nvmf_stop_mdns_prr", 00:06:15.181 "nvmf_publish_mdns_prr", 00:06:15.181 "nvmf_subsystem_get_listeners", 00:06:15.181 "nvmf_subsystem_get_qpairs", 00:06:15.181 "nvmf_subsystem_get_controllers", 00:06:15.181 "nvmf_get_stats", 00:06:15.181 "nvmf_get_transports", 00:06:15.181 
"nvmf_create_transport", 00:06:15.181 "nvmf_get_targets", 00:06:15.181 "nvmf_delete_target", 00:06:15.181 "nvmf_create_target", 00:06:15.181 "nvmf_subsystem_allow_any_host", 00:06:15.181 "nvmf_subsystem_remove_host", 00:06:15.181 "nvmf_subsystem_add_host", 00:06:15.181 "nvmf_ns_remove_host", 00:06:15.181 "nvmf_ns_add_host", 00:06:15.181 "nvmf_subsystem_remove_ns", 00:06:15.181 "nvmf_subsystem_add_ns", 00:06:15.181 "nvmf_subsystem_listener_set_ana_state", 00:06:15.181 "nvmf_discovery_get_referrals", 00:06:15.181 "nvmf_discovery_remove_referral", 00:06:15.181 "nvmf_discovery_add_referral", 00:06:15.181 "nvmf_subsystem_remove_listener", 00:06:15.181 "nvmf_subsystem_add_listener", 00:06:15.181 "nvmf_delete_subsystem", 00:06:15.181 "nvmf_create_subsystem", 00:06:15.181 "nvmf_get_subsystems", 00:06:15.181 "env_dpdk_get_mem_stats", 00:06:15.181 "nbd_get_disks", 00:06:15.181 "nbd_stop_disk", 00:06:15.181 "nbd_start_disk", 00:06:15.181 "ublk_recover_disk", 00:06:15.181 "ublk_get_disks", 00:06:15.181 "ublk_stop_disk", 00:06:15.181 "ublk_start_disk", 00:06:15.181 "ublk_destroy_target", 00:06:15.181 "ublk_create_target", 00:06:15.181 "virtio_blk_create_transport", 00:06:15.181 "virtio_blk_get_transports", 00:06:15.181 "vhost_controller_set_coalescing", 00:06:15.181 "vhost_get_controllers", 00:06:15.181 "vhost_delete_controller", 00:06:15.181 "vhost_create_blk_controller", 00:06:15.181 "vhost_scsi_controller_remove_target", 00:06:15.181 "vhost_scsi_controller_add_target", 00:06:15.181 "vhost_start_scsi_controller", 00:06:15.181 "vhost_create_scsi_controller", 00:06:15.181 "thread_set_cpumask", 00:06:15.181 "framework_get_governor", 00:06:15.181 "framework_get_scheduler", 00:06:15.181 "framework_set_scheduler", 00:06:15.181 "framework_get_reactors", 00:06:15.181 "thread_get_io_channels", 00:06:15.181 "thread_get_pollers", 00:06:15.181 "thread_get_stats", 00:06:15.181 "framework_monitor_context_switch", 00:06:15.181 "spdk_kill_instance", 00:06:15.181 "log_enable_timestamps", 00:06:15.181 "log_get_flags", 00:06:15.181 "log_clear_flag", 00:06:15.181 "log_set_flag", 00:06:15.181 "log_get_level", 00:06:15.181 "log_set_level", 00:06:15.181 "log_get_print_level", 00:06:15.181 "log_set_print_level", 00:06:15.181 "framework_enable_cpumask_locks", 00:06:15.181 "framework_disable_cpumask_locks", 00:06:15.181 "framework_wait_init", 00:06:15.181 "framework_start_init", 00:06:15.181 "scsi_get_devices", 00:06:15.181 "bdev_get_histogram", 00:06:15.181 "bdev_enable_histogram", 00:06:15.181 "bdev_set_qos_limit", 00:06:15.181 "bdev_set_qd_sampling_period", 00:06:15.181 "bdev_get_bdevs", 00:06:15.181 "bdev_reset_iostat", 00:06:15.181 "bdev_get_iostat", 00:06:15.181 "bdev_examine", 00:06:15.181 "bdev_wait_for_examine", 00:06:15.181 "bdev_set_options", 00:06:15.181 "notify_get_notifications", 00:06:15.181 "notify_get_types", 00:06:15.181 "accel_get_stats", 00:06:15.181 "accel_set_options", 00:06:15.181 "accel_set_driver", 00:06:15.181 "accel_crypto_key_destroy", 00:06:15.181 "accel_crypto_keys_get", 00:06:15.181 "accel_crypto_key_create", 00:06:15.181 "accel_assign_opc", 00:06:15.181 "accel_get_module_info", 00:06:15.181 "accel_get_opc_assignments", 00:06:15.181 "vmd_rescan", 00:06:15.181 "vmd_remove_device", 00:06:15.181 "vmd_enable", 00:06:15.181 "sock_get_default_impl", 00:06:15.181 "sock_set_default_impl", 00:06:15.181 "sock_impl_set_options", 00:06:15.181 "sock_impl_get_options", 00:06:15.181 "iobuf_get_stats", 00:06:15.181 "iobuf_set_options", 00:06:15.181 "framework_get_pci_devices", 00:06:15.181 
"framework_get_config", 00:06:15.181 "framework_get_subsystems", 00:06:15.181 "trace_get_info", 00:06:15.181 "trace_get_tpoint_group_mask", 00:06:15.181 "trace_disable_tpoint_group", 00:06:15.181 "trace_enable_tpoint_group", 00:06:15.181 "trace_clear_tpoint_mask", 00:06:15.181 "trace_set_tpoint_mask", 00:06:15.181 "keyring_get_keys", 00:06:15.181 "spdk_get_version", 00:06:15.181 "rpc_get_methods" 00:06:15.181 ] 00:06:15.181 05:53:06 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:15.181 05:53:06 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:15.181 05:53:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:15.181 05:53:06 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:15.181 05:53:06 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 71827 00:06:15.181 05:53:06 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 71827 ']' 00:06:15.181 05:53:06 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 71827 00:06:15.181 05:53:06 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:06:15.181 05:53:06 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:15.181 05:53:06 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71827 00:06:15.438 05:53:06 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:15.438 05:53:06 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:15.438 killing process with pid 71827 00:06:15.439 05:53:06 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71827' 00:06:15.439 05:53:06 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 71827 00:06:15.439 05:53:06 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 71827 00:06:15.439 00:06:15.439 real 0m1.011s 00:06:15.439 user 0m1.804s 00:06:15.439 sys 0m0.318s 00:06:15.439 05:53:07 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.439 05:53:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:15.439 ************************************ 00:06:15.439 END TEST spdkcli_tcp 00:06:15.439 ************************************ 00:06:15.696 05:53:07 -- common/autotest_common.sh@1142 -- # return 0 00:06:15.696 05:53:07 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:15.696 05:53:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:15.696 05:53:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.696 05:53:07 -- common/autotest_common.sh@10 -- # set +x 00:06:15.696 ************************************ 00:06:15.696 START TEST dpdk_mem_utility 00:06:15.696 ************************************ 00:06:15.696 05:53:07 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:15.696 * Looking for test storage... 
00:06:15.696 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:15.696 05:53:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:15.696 05:53:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=71905 00:06:15.696 05:53:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 71905 00:06:15.696 05:53:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:15.696 05:53:07 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 71905 ']' 00:06:15.696 05:53:07 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.696 05:53:07 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:15.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.696 05:53:07 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.696 05:53:07 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:15.696 05:53:07 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:15.696 [2024-07-13 05:53:07.349142] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:15.696 [2024-07-13 05:53:07.349253] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71905 ] 00:06:15.955 [2024-07-13 05:53:07.489820] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.955 [2024-07-13 05:53:07.523712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.955 [2024-07-13 05:53:07.552184] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:16.893 05:53:08 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:16.893 05:53:08 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:06:16.893 05:53:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:16.893 05:53:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:16.893 05:53:08 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.893 05:53:08 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:16.893 { 00:06:16.893 "filename": "/tmp/spdk_mem_dump.txt" 00:06:16.893 } 00:06:16.893 05:53:08 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.893 05:53:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:16.893 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:16.893 1 heaps totaling size 814.000000 MiB 00:06:16.893 size: 814.000000 MiB heap id: 0 00:06:16.893 end heaps---------- 00:06:16.893 8 mempools totaling size 598.116089 MiB 00:06:16.893 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:16.893 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:16.893 size: 84.521057 MiB name: bdev_io_71905 00:06:16.893 size: 51.011292 MiB name: evtpool_71905 00:06:16.893 size: 50.003479 
MiB name: msgpool_71905 00:06:16.893 size: 21.763794 MiB name: PDU_Pool 00:06:16.893 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:16.893 size: 0.026123 MiB name: Session_Pool 00:06:16.893 end mempools------- 00:06:16.893 6 memzones totaling size 4.142822 MiB 00:06:16.893 size: 1.000366 MiB name: RG_ring_0_71905 00:06:16.893 size: 1.000366 MiB name: RG_ring_1_71905 00:06:16.893 size: 1.000366 MiB name: RG_ring_4_71905 00:06:16.893 size: 1.000366 MiB name: RG_ring_5_71905 00:06:16.893 size: 0.125366 MiB name: RG_ring_2_71905 00:06:16.893 size: 0.015991 MiB name: RG_ring_3_71905 00:06:16.893 end memzones------- 00:06:16.893 05:53:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:16.893 heap id: 0 total size: 814.000000 MiB number of busy elements: 307 number of free elements: 15 00:06:16.893 list of free elements. size: 12.470642 MiB 00:06:16.893 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:16.893 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:16.893 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:16.893 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:16.893 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:16.893 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:16.893 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:16.893 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:16.893 element at address: 0x200000200000 with size: 0.833191 MiB 00:06:16.893 element at address: 0x20001aa00000 with size: 0.568054 MiB 00:06:16.893 element at address: 0x20000b200000 with size: 0.488892 MiB 00:06:16.893 element at address: 0x200000800000 with size: 0.486145 MiB 00:06:16.893 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:16.893 element at address: 0x200027e00000 with size: 0.395752 MiB 00:06:16.893 element at address: 0x200003a00000 with size: 0.347839 MiB 00:06:16.893 list of standard malloc elements. 
size: 199.266785 MiB 00:06:16.893 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:16.893 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:16.893 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:16.893 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:16.893 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:16.893 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:16.893 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:16.893 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:16.893 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:16.893 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:06:16.893 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:06:16.893 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:06:16.893 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:06:16.893 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:06:16.893 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:06:16.893 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:06:16.893 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:06:16.893 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:06:16.893 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:06:16.893 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:06:16.893 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:06:16.893 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:06:16.893 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:06:16.893 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:06:16.893 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:06:16.893 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:06:16.893 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:06:16.894 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:06:16.894 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:06:16.894 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:06:16.894 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:06:16.894 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:06:16.894 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:06:16.894 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:06:16.894 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:06:16.894 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:06:16.894 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:06:16.894 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:06:16.894 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:06:16.894 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:06:16.894 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:06:16.894 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:06:16.894 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:06:16.894 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:06:16.894 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:06:16.894 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:06:16.894 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:06:16.894 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:06:16.894 element at address: 0x2000002d7340 with size: 0.000183 MiB 
00:06:16.894 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:06:16.894 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:06:16.894 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:06:16.894 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:06:16.894 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:06:16.894 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:06:16.894 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:06:16.894 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:06:16.894 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:06:16.894 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:16.894 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:16.894 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:16.894 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:16.894 element at address: 0x20000087c740 with size: 0.000183 MiB 00:06:16.894 element at address: 0x20000087c800 with size: 0.000183 MiB 00:06:16.894 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:06:16.894 element at address: 0x20000087c980 with size: 0.000183 MiB 00:06:16.894 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:06:16.894 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:06:16.894 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:06:16.894 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:06:16.894 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:06:16.894 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:16.894 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:16.894 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:16.894 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:06:16.894 element at address: 0x200003a59180 with size: 0.000183 MiB 00:06:16.894 element at address: 0x200003a59240 with size: 0.000183 MiB 00:06:16.894 element at address: 0x200003a59300 with size: 0.000183 MiB 00:06:16.894 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:06:16.894 element at address: 0x200003a59480 with size: 0.000183 MiB 00:06:16.894 element at address: 0x200003a59540 with size: 0.000183 MiB 00:06:16.894 element at address: 0x200003a59600 with size: 0.000183 MiB 00:06:16.894 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:06:16.894 element at address: 0x200003a59780 with size: 0.000183 MiB 00:06:16.894 element at address: 0x200003a59840 with size: 0.000183 MiB 00:06:16.894 element at address: 0x200003a59900 with size: 0.000183 MiB 00:06:16.894 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:06:16.894 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:06:16.894 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:06:16.894 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:06:16.894 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:06:16.894 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:06:16.894 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:06:16.894 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:06:16.894 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:06:16.894 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:06:16.894 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:06:16.894 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:06:16.894 element at 
address: 0x200003a5a2c0 with size: 0.000183 MiB 00:06:16.894 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:06:16.894 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:06:16.894 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:06:16.894 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:06:16.894 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:06:16.894 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:06:16.894 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:06:16.894 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:06:16.894 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:06:16.894 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:06:16.894 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:06:16.894 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:06:16.894 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:06:16.894 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:06:16.894 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:06:16.894 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:06:16.894 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:06:16.894 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:16.894 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:16.894 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:16.894 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:16.894 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:16.894 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:16.894 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:16.894 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:16.894 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:06:16.894 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:06:16.894 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:06:16.894 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:06:16.894 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:06:16.894 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:06:16.894 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:06:16.894 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:06:16.894 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:06:16.894 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:06:16.894 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:16.894 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:16.894 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:16.894 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:16.894 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:16.894 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:16.894 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:16.894 element at address: 0x20001aa916c0 with size: 0.000183 MiB 00:06:16.894 element at address: 0x20001aa91780 with size: 0.000183 MiB 00:06:16.894 element at address: 0x20001aa91840 with size: 0.000183 MiB 00:06:16.894 element at address: 0x20001aa91900 with size: 0.000183 MiB 00:06:16.894 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:06:16.894 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:06:16.894 element at address: 0x20001aa91b40 
with size: 0.000183 MiB 00:06:16.894 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:06:16.894 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:06:16.894 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:06:16.894 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:06:16.894 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:06:16.894 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:06:16.894 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:06:16.894 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:06:16.894 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:06:16.894 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:06:16.894 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:06:16.894 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:06:16.894 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:06:16.894 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:06:16.894 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:06:16.894 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:06:16.894 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:06:16.894 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:06:16.895 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:06:16.895 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:06:16.895 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:06:16.895 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:06:16.895 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:06:16.895 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:06:16.895 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:06:16.895 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:06:16.895 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:06:16.895 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:06:16.895 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:06:16.895 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:06:16.895 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:06:16.895 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:06:16.895 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:06:16.895 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:06:16.895 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:06:16.895 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:06:16.895 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:06:16.895 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:06:16.895 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:06:16.895 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:06:16.895 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:06:16.895 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:06:16.895 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:06:16.895 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:06:16.895 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:06:16.895 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:06:16.895 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:06:16.895 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:06:16.895 element at address: 0x20001aa94000 with size: 0.000183 MiB 
00:06:16.895 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:06:16.895 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:06:16.895 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:06:16.895 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:06:16.895 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:06:16.895 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:06:16.895 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:06:16.895 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:06:16.895 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:06:16.895 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:06:16.895 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:06:16.895 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:06:16.895 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:06:16.895 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:06:16.895 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:06:16.895 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:06:16.895 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:06:16.895 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:06:16.895 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:06:16.895 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:06:16.895 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:06:16.895 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:06:16.895 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:06:16.895 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:06:16.895 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:06:16.895 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:16.895 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e65500 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6c1c0 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6c3c0 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:06:16.895 element at 
address: 0x200027e6d200 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6f6c0 
with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:06:16.895 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:16.896 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:16.896 list of memzone associated elements. size: 602.262573 MiB 00:06:16.896 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:16.896 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:16.896 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:16.896 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:16.896 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:16.896 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_71905_0 00:06:16.896 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:16.896 associated memzone info: size: 48.002930 MiB name: MP_evtpool_71905_0 00:06:16.896 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:16.896 associated memzone info: size: 48.002930 MiB name: MP_msgpool_71905_0 00:06:16.896 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:16.896 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:16.896 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:16.896 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:16.896 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:16.896 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_71905 00:06:16.896 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:16.896 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_71905 00:06:16.896 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:16.896 associated memzone info: size: 1.007996 MiB name: MP_evtpool_71905 00:06:16.896 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:16.896 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:16.896 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:16.896 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:16.896 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:16.896 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:16.896 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:16.896 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:16.896 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:16.896 associated memzone info: size: 1.000366 MiB name: RG_ring_0_71905 00:06:16.896 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:16.896 associated memzone info: size: 1.000366 MiB name: RG_ring_1_71905 00:06:16.896 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:16.896 associated memzone info: size: 1.000366 MiB name: RG_ring_4_71905 00:06:16.896 element at 
address: 0x200031cfe940 with size: 1.000488 MiB 00:06:16.896 associated memzone info: size: 1.000366 MiB name: RG_ring_5_71905 00:06:16.896 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:16.896 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_71905 00:06:16.896 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:16.896 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:16.896 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:16.896 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:16.896 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:16.896 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:16.896 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:16.896 associated memzone info: size: 0.125366 MiB name: RG_ring_2_71905 00:06:16.896 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:16.896 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:16.896 element at address: 0x200027e65680 with size: 0.023743 MiB 00:06:16.896 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:16.896 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:16.896 associated memzone info: size: 0.015991 MiB name: RG_ring_3_71905 00:06:16.896 element at address: 0x200027e6b7c0 with size: 0.002441 MiB 00:06:16.896 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:16.896 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:06:16.896 associated memzone info: size: 0.000183 MiB name: MP_msgpool_71905 00:06:16.896 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:16.896 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_71905 00:06:16.896 element at address: 0x200027e6c280 with size: 0.000305 MiB 00:06:16.896 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:16.896 05:53:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:16.896 05:53:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 71905 00:06:16.896 05:53:08 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 71905 ']' 00:06:16.896 05:53:08 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 71905 00:06:16.896 05:53:08 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:06:16.896 05:53:08 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:16.896 05:53:08 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71905 00:06:16.896 05:53:08 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:16.896 05:53:08 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:16.896 05:53:08 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71905' 00:06:16.896 killing process with pid 71905 00:06:16.896 05:53:08 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 71905 00:06:16.896 05:53:08 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 71905 00:06:17.155 00:06:17.155 real 0m1.487s 00:06:17.155 user 0m1.729s 00:06:17.155 sys 0m0.300s 00:06:17.155 05:53:08 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.155 05:53:08 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:17.155 ************************************ 00:06:17.155 END 
TEST dpdk_mem_utility 00:06:17.155 ************************************ 00:06:17.155 05:53:08 -- common/autotest_common.sh@1142 -- # return 0 00:06:17.155 05:53:08 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:17.155 05:53:08 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:17.155 05:53:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.155 05:53:08 -- common/autotest_common.sh@10 -- # set +x 00:06:17.155 ************************************ 00:06:17.155 START TEST event 00:06:17.155 ************************************ 00:06:17.155 05:53:08 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:17.155 * Looking for test storage... 00:06:17.155 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:17.155 05:53:08 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:17.155 05:53:08 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:17.155 05:53:08 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:17.155 05:53:08 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:17.155 05:53:08 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.155 05:53:08 event -- common/autotest_common.sh@10 -- # set +x 00:06:17.155 ************************************ 00:06:17.155 START TEST event_perf 00:06:17.155 ************************************ 00:06:17.155 05:53:08 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:17.155 Running I/O for 1 seconds...[2024-07-13 05:53:08.852074] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:17.155 [2024-07-13 05:53:08.852183] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71977 ] 00:06:17.414 [2024-07-13 05:53:08.983898] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:17.414 [2024-07-13 05:53:09.024055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.414 [2024-07-13 05:53:09.024153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:17.414 [2024-07-13 05:53:09.024296] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:17.414 [2024-07-13 05:53:09.024300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.790 Running I/O for 1 seconds... 00:06:18.790 lcore 0: 207185 00:06:18.790 lcore 1: 207183 00:06:18.790 lcore 2: 207183 00:06:18.790 lcore 3: 207184 00:06:18.790 done. 
00:06:18.790 00:06:18.790 real 0m1.249s 00:06:18.790 user 0m4.081s 00:06:18.790 sys 0m0.048s 00:06:18.790 05:53:10 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.790 ************************************ 00:06:18.790 END TEST event_perf 00:06:18.790 ************************************ 00:06:18.790 05:53:10 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:18.790 05:53:10 event -- common/autotest_common.sh@1142 -- # return 0 00:06:18.790 05:53:10 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:18.790 05:53:10 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:18.790 05:53:10 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.790 05:53:10 event -- common/autotest_common.sh@10 -- # set +x 00:06:18.790 ************************************ 00:06:18.790 START TEST event_reactor 00:06:18.790 ************************************ 00:06:18.790 05:53:10 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:18.790 [2024-07-13 05:53:10.150132] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:18.790 [2024-07-13 05:53:10.150222] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72015 ] 00:06:18.790 [2024-07-13 05:53:10.286227] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.790 [2024-07-13 05:53:10.324496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.728 test_start 00:06:19.728 oneshot 00:06:19.728 tick 100 00:06:19.728 tick 100 00:06:19.728 tick 250 00:06:19.728 tick 100 00:06:19.728 tick 100 00:06:19.728 tick 100 00:06:19.728 tick 250 00:06:19.728 tick 500 00:06:19.728 tick 100 00:06:19.728 tick 100 00:06:19.728 tick 250 00:06:19.728 tick 100 00:06:19.728 tick 100 00:06:19.728 test_end 00:06:19.728 ************************************ 00:06:19.728 END TEST event_reactor 00:06:19.728 ************************************ 00:06:19.728 00:06:19.728 real 0m1.244s 00:06:19.728 user 0m1.094s 00:06:19.728 sys 0m0.044s 00:06:19.728 05:53:11 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.728 05:53:11 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:19.728 05:53:11 event -- common/autotest_common.sh@1142 -- # return 0 00:06:19.728 05:53:11 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:19.728 05:53:11 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:19.728 05:53:11 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.728 05:53:11 event -- common/autotest_common.sh@10 -- # set +x 00:06:19.728 ************************************ 00:06:19.728 START TEST event_reactor_perf 00:06:19.728 ************************************ 00:06:19.728 05:53:11 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:19.728 [2024-07-13 05:53:11.446335] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
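The event_perf, reactor and reactor_perf binaries driven in this stretch of the log can be rerun outside the harness with the same arguments the log shows; a minimal sketch, assuming an SPDK build at $SPDK_DIR and hugepages already configured as in this job (root is needed for EAL initialization):

    # Re-run the three event tests with the invocations seen in this log.
    SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}
    sudo "$SPDK_DIR/test/event/event_perf/event_perf" -m 0xF -t 1     # prints per-lcore event counters
    sudo "$SPDK_DIR/test/event/reactor/reactor" -t 1                  # prints the oneshot/tick trace
    sudo "$SPDK_DIR/test/event/reactor_perf/reactor_perf" -t 1        # prints an events-per-second figure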
00:06:19.728 [2024-07-13 05:53:11.446476] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72045 ] 00:06:19.988 [2024-07-13 05:53:11.582449] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.988 [2024-07-13 05:53:11.617716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.366 test_start 00:06:21.366 test_end 00:06:21.366 Performance: 422242 events per second 00:06:21.366 00:06:21.366 real 0m1.240s 00:06:21.366 user 0m1.090s 00:06:21.366 sys 0m0.044s 00:06:21.366 05:53:12 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.366 05:53:12 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:21.366 ************************************ 00:06:21.366 END TEST event_reactor_perf 00:06:21.366 ************************************ 00:06:21.366 05:53:12 event -- common/autotest_common.sh@1142 -- # return 0 00:06:21.366 05:53:12 event -- event/event.sh@49 -- # uname -s 00:06:21.366 05:53:12 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:21.366 05:53:12 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:21.366 05:53:12 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:21.366 05:53:12 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.366 05:53:12 event -- common/autotest_common.sh@10 -- # set +x 00:06:21.366 ************************************ 00:06:21.366 START TEST event_scheduler 00:06:21.366 ************************************ 00:06:21.366 05:53:12 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:21.366 * Looking for test storage... 00:06:21.366 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:21.366 05:53:12 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:21.366 05:53:12 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=72101 00:06:21.367 05:53:12 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:21.367 05:53:12 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:21.367 05:53:12 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 72101 00:06:21.367 05:53:12 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 72101 ']' 00:06:21.367 05:53:12 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.367 05:53:12 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:21.367 05:53:12 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.367 05:53:12 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:21.367 05:53:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:21.367 [2024-07-13 05:53:12.856533] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:06:21.367 [2024-07-13 05:53:12.856607] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72101 ] 00:06:21.367 [2024-07-13 05:53:12.991747] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:21.367 [2024-07-13 05:53:13.035530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.367 [2024-07-13 05:53:13.035578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.367 [2024-07-13 05:53:13.035702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:21.367 [2024-07-13 05:53:13.035710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:21.367 05:53:13 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:21.367 05:53:13 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:06:21.367 05:53:13 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:21.367 05:53:13 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.367 05:53:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:21.367 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:21.367 POWER: Cannot set governor of lcore 0 to userspace 00:06:21.367 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:21.367 POWER: Cannot set governor of lcore 0 to performance 00:06:21.367 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:21.367 POWER: Cannot set governor of lcore 0 to userspace 00:06:21.367 GUEST_CHANNEL: Unable to to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:21.367 POWER: Unable to set Power Management Environment for lcore 0 00:06:21.367 [2024-07-13 05:53:13.090995] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:21.367 [2024-07-13 05:53:13.091122] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:06:21.367 [2024-07-13 05:53:13.091233] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:21.367 [2024-07-13 05:53:13.091384] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:21.367 [2024-07-13 05:53:13.091516] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:21.367 [2024-07-13 05:53:13.091632] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:21.629 05:53:13 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.629 05:53:13 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:21.629 05:53:13 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.629 05:53:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:21.629 [2024-07-13 05:53:13.132900] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:21.629 [2024-07-13 05:53:13.152557] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:06:21.629 05:53:13 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.629 05:53:13 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:21.630 05:53:13 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:21.630 05:53:13 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.630 05:53:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:21.630 ************************************ 00:06:21.630 START TEST scheduler_create_thread 00:06:21.630 ************************************ 00:06:21.630 05:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:06:21.630 05:53:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:21.630 05:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.630 05:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.630 2 00:06:21.630 05:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.630 05:53:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:21.630 05:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.630 05:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.630 3 00:06:21.630 05:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.630 05:53:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:21.630 05:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.630 05:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.630 4 00:06:21.630 05:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.630 05:53:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:21.630 05:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.630 05:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.630 5 00:06:21.630 05:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.630 05:53:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:21.630 05:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.630 05:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.630 6 00:06:21.630 05:53:13 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.630 05:53:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:21.630 05:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.631 05:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.631 7 00:06:21.631 05:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.631 05:53:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:21.631 05:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.631 05:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.631 8 00:06:21.631 05:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.631 05:53:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:21.631 05:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.631 05:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.631 9 00:06:21.631 05:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.631 05:53:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:21.631 05:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.631 05:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.631 10 00:06:21.631 05:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.631 05:53:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:21.631 05:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.631 05:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.631 05:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.631 05:53:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:21.631 05:53:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:21.631 05:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.631 05:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.631 05:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.631 05:53:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:21.631 05:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.631 05:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.632 05:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.632 05:53:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:21.632 05:53:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:21.632 05:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.632 05:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.004 05:53:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.004 00:06:23.004 real 0m1.172s 00:06:23.004 user 0m0.015s 00:06:23.004 sys 0m0.008s 00:06:23.004 ************************************ 00:06:23.004 END TEST scheduler_create_thread 00:06:23.004 ************************************ 00:06:23.004 05:53:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:23.004 05:53:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.004 05:53:14 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:06:23.004 05:53:14 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:23.004 05:53:14 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 72101 00:06:23.004 05:53:14 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 72101 ']' 00:06:23.004 05:53:14 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 72101 00:06:23.004 05:53:14 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:06:23.004 05:53:14 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:23.004 05:53:14 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72101 00:06:23.004 killing process with pid 72101 00:06:23.004 05:53:14 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:23.005 05:53:14 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:23.005 05:53:14 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72101' 00:06:23.005 05:53:14 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 72101 00:06:23.005 05:53:14 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 72101 00:06:23.286 [2024-07-13 05:53:14.818565] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
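The scheduler_create_thread test above is a sequence of RPCs against the scheduler test app started with --wait-for-rpc. A rough sketch of replaying it by hand with scripts/rpc.py, assuming the app is already running on the default /var/tmp/spdk.sock and that scheduler_plugin is importable (the harness puts test/event/scheduler on PYTHONPATH); the -n/-m/-a values mirror the log, and captured thread ids stand in for the literal 11/12 seen above:

    SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}
    RPC="sudo $SPDK_DIR/scripts/rpc.py"
    $RPC framework_set_scheduler dynamic        # the governor NOTICEs above are non-fatal
    $RPC framework_start_init
    # One pinned active and one pinned idle thread per core 0-3 (masks 0x1..0x8).
    for m in 0x1 0x2 0x4 0x8; do
        $RPC --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m $m -a 100
        $RPC --plugin scheduler_plugin scheduler_thread_create -n idle_pinned   -m $m -a 0
    done
    $RPC --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
    tid=$($RPC --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
    $RPC --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
    tid=$($RPC --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
    $RPC --plugin scheduler_plugin scheduler_thread_delete "$tid"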
00:06:23.286 ************************************ 00:06:23.286 END TEST event_scheduler 00:06:23.286 ************************************ 00:06:23.286 00:06:23.286 real 0m2.223s 00:06:23.286 user 0m2.448s 00:06:23.286 sys 0m0.280s 00:06:23.286 05:53:14 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:23.286 05:53:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:23.286 05:53:14 event -- common/autotest_common.sh@1142 -- # return 0 00:06:23.286 05:53:14 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:23.544 05:53:14 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:23.544 05:53:14 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:23.544 05:53:14 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.544 05:53:14 event -- common/autotest_common.sh@10 -- # set +x 00:06:23.544 ************************************ 00:06:23.544 START TEST app_repeat 00:06:23.544 ************************************ 00:06:23.544 05:53:15 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:06:23.544 05:53:15 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.544 05:53:15 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.544 05:53:15 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:23.544 05:53:15 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:23.544 05:53:15 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:23.544 05:53:15 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:23.544 05:53:15 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:23.544 05:53:15 event.app_repeat -- event/event.sh@19 -- # repeat_pid=72177 00:06:23.544 05:53:15 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:23.544 05:53:15 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:23.544 Process app_repeat pid: 72177 00:06:23.544 05:53:15 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 72177' 00:06:23.544 spdk_app_start Round 0 00:06:23.544 05:53:15 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:23.544 05:53:15 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:23.544 05:53:15 event.app_repeat -- event/event.sh@25 -- # waitforlisten 72177 /var/tmp/spdk-nbd.sock 00:06:23.544 05:53:15 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 72177 ']' 00:06:23.544 05:53:15 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:23.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:23.544 05:53:15 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:23.544 05:53:15 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:23.544 05:53:15 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:23.544 05:53:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:23.544 [2024-07-13 05:53:15.034267] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
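Each app_repeat round that follows repeats the same pattern: create two malloc bdevs over the app's -r socket, export them as NBD devices, write and verify 1 MiB of random data, then tear everything down. A condensed sketch of one round, assuming app_repeat is already listening on /var/tmp/spdk-nbd.sock and the nbd module is loaded; /tmp/nbdrandtest is an illustrative stand-in for the harness's nbdrandtest path:

    SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}
    RPC="sudo $SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $RPC bdev_malloc_create 64 4096            # -> Malloc0 (64 MiB, 4 KiB blocks)
    $RPC bdev_malloc_create 64 4096            # -> Malloc1
    $RPC nbd_start_disk Malloc0 /dev/nbd0
    $RPC nbd_start_disk Malloc1 /dev/nbd1
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256          # 1 MiB of test data
    for d in /dev/nbd0 /dev/nbd1; do
        sudo dd if=/tmp/nbdrandtest of="$d" bs=4096 count=256 oflag=direct
        sudo cmp -b -n 1M /tmp/nbdrandtest "$d"                       # read back and compare
    done
    $RPC nbd_stop_disk /dev/nbd0
    $RPC nbd_stop_disk /dev/nbd1
    $RPC spdk_kill_instance SIGTERM            # ends the round, as at the bottom of each round below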
00:06:23.544 [2024-07-13 05:53:15.034354] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72177 ] 00:06:23.544 [2024-07-13 05:53:15.169027] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:23.544 [2024-07-13 05:53:15.212748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.544 [2024-07-13 05:53:15.212763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.544 [2024-07-13 05:53:15.247347] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:23.803 05:53:15 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:23.803 05:53:15 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:23.803 05:53:15 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:23.803 Malloc0 00:06:23.803 05:53:15 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:24.067 Malloc1 00:06:24.325 05:53:15 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:24.326 05:53:15 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.326 05:53:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:24.326 05:53:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:24.326 05:53:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.326 05:53:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:24.326 05:53:15 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:24.326 05:53:15 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.326 05:53:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:24.326 05:53:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:24.326 05:53:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.326 05:53:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:24.326 05:53:15 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:24.326 05:53:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:24.326 05:53:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:24.326 05:53:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:24.584 /dev/nbd0 00:06:24.584 05:53:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:24.584 05:53:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:24.584 05:53:16 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:24.584 05:53:16 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:24.584 05:53:16 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:24.584 05:53:16 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:24.584 05:53:16 
event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:24.584 05:53:16 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:24.584 05:53:16 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:24.584 05:53:16 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:24.584 05:53:16 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:24.584 1+0 records in 00:06:24.584 1+0 records out 00:06:24.584 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000656546 s, 6.2 MB/s 00:06:24.584 05:53:16 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:24.584 05:53:16 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:24.584 05:53:16 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:24.584 05:53:16 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:24.584 05:53:16 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:24.584 05:53:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:24.584 05:53:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:24.584 05:53:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:24.843 /dev/nbd1 00:06:24.843 05:53:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:24.843 05:53:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:24.843 05:53:16 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:24.843 05:53:16 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:24.843 05:53:16 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:24.843 05:53:16 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:24.843 05:53:16 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:24.843 05:53:16 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:24.843 05:53:16 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:24.843 05:53:16 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:24.843 05:53:16 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:24.843 1+0 records in 00:06:24.843 1+0 records out 00:06:24.843 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000538523 s, 7.6 MB/s 00:06:24.843 05:53:16 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:24.843 05:53:16 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:24.843 05:53:16 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:24.843 05:53:16 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:24.843 05:53:16 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:24.843 05:53:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:24.843 05:53:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:24.843 05:53:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count 
/var/tmp/spdk-nbd.sock 00:06:24.843 05:53:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.843 05:53:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:25.101 05:53:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:25.101 { 00:06:25.101 "nbd_device": "/dev/nbd0", 00:06:25.101 "bdev_name": "Malloc0" 00:06:25.101 }, 00:06:25.101 { 00:06:25.101 "nbd_device": "/dev/nbd1", 00:06:25.101 "bdev_name": "Malloc1" 00:06:25.101 } 00:06:25.101 ]' 00:06:25.101 05:53:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:25.101 { 00:06:25.101 "nbd_device": "/dev/nbd0", 00:06:25.101 "bdev_name": "Malloc0" 00:06:25.101 }, 00:06:25.101 { 00:06:25.101 "nbd_device": "/dev/nbd1", 00:06:25.101 "bdev_name": "Malloc1" 00:06:25.101 } 00:06:25.101 ]' 00:06:25.101 05:53:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:25.101 05:53:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:25.101 /dev/nbd1' 00:06:25.101 05:53:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:25.101 /dev/nbd1' 00:06:25.101 05:53:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:25.101 05:53:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:25.101 05:53:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:25.101 05:53:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:25.101 05:53:16 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:25.101 05:53:16 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:25.101 05:53:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.101 05:53:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:25.101 05:53:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:25.101 05:53:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:25.101 05:53:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:25.101 05:53:16 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:25.101 256+0 records in 00:06:25.101 256+0 records out 00:06:25.101 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00738052 s, 142 MB/s 00:06:25.101 05:53:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:25.101 05:53:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:25.101 256+0 records in 00:06:25.101 256+0 records out 00:06:25.101 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0231757 s, 45.2 MB/s 00:06:25.101 05:53:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:25.101 05:53:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:25.360 256+0 records in 00:06:25.360 256+0 records out 00:06:25.360 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0285899 s, 36.7 MB/s 00:06:25.360 05:53:16 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:25.360 05:53:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.360 05:53:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:25.360 05:53:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:25.360 05:53:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:25.360 05:53:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:25.360 05:53:16 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:25.360 05:53:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:25.360 05:53:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:25.360 05:53:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:25.360 05:53:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:25.360 05:53:16 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:25.360 05:53:16 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:25.360 05:53:16 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.360 05:53:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.360 05:53:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:25.360 05:53:16 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:25.360 05:53:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:25.360 05:53:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:25.618 05:53:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:25.618 05:53:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:25.618 05:53:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:25.618 05:53:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:25.618 05:53:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:25.618 05:53:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:25.618 05:53:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:25.618 05:53:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:25.618 05:53:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:25.618 05:53:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:25.618 05:53:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:25.618 05:53:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:25.618 05:53:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:25.618 05:53:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:25.618 05:53:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:25.618 05:53:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:25.618 05:53:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:25.618 05:53:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:25.618 05:53:17 
event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:25.618 05:53:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.618 05:53:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:25.876 05:53:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:25.876 05:53:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:25.876 05:53:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:25.876 05:53:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:25.876 05:53:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:25.876 05:53:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:25.876 05:53:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:26.134 05:53:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:26.134 05:53:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:26.134 05:53:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:26.134 05:53:17 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:26.134 05:53:17 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:26.134 05:53:17 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:26.392 05:53:17 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:26.392 [2024-07-13 05:53:17.994084] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:26.392 [2024-07-13 05:53:18.031137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.392 [2024-07-13 05:53:18.031150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.392 [2024-07-13 05:53:18.061146] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:26.392 [2024-07-13 05:53:18.061243] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:26.393 [2024-07-13 05:53:18.061257] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:29.672 spdk_app_start Round 1 00:06:29.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:29.672 05:53:20 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:29.672 05:53:20 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:29.672 05:53:20 event.app_repeat -- event/event.sh@25 -- # waitforlisten 72177 /var/tmp/spdk-nbd.sock 00:06:29.672 05:53:20 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 72177 ']' 00:06:29.672 05:53:20 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:29.672 05:53:20 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:29.672 05:53:20 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:29.672 05:53:20 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:29.672 05:53:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:29.672 05:53:21 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:29.672 05:53:21 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:29.672 05:53:21 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:29.930 Malloc0 00:06:29.930 05:53:21 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:29.930 Malloc1 00:06:29.930 05:53:21 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:29.930 05:53:21 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.930 05:53:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:29.930 05:53:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:29.930 05:53:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.930 05:53:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:29.930 05:53:21 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:29.930 05:53:21 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.930 05:53:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:29.930 05:53:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:29.930 05:53:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.930 05:53:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:29.930 05:53:21 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:29.930 05:53:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:29.930 05:53:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:29.930 05:53:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:30.188 /dev/nbd0 00:06:30.188 05:53:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:30.188 05:53:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:30.188 05:53:21 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:30.188 05:53:21 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:30.188 05:53:21 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:30.188 05:53:21 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:30.188 05:53:21 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:30.188 05:53:21 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:30.188 05:53:21 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:30.188 05:53:21 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:30.188 05:53:21 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:30.188 1+0 records in 00:06:30.188 1+0 records out 
00:06:30.188 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000348074 s, 11.8 MB/s 00:06:30.188 05:53:21 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:30.188 05:53:21 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:30.188 05:53:21 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:30.188 05:53:21 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:30.188 05:53:21 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:30.188 05:53:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:30.188 05:53:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:30.188 05:53:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:30.446 /dev/nbd1 00:06:30.447 05:53:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:30.447 05:53:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:30.447 05:53:22 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:30.447 05:53:22 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:30.447 05:53:22 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:30.447 05:53:22 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:30.447 05:53:22 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:30.447 05:53:22 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:30.447 05:53:22 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:30.447 05:53:22 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:30.447 05:53:22 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:30.447 1+0 records in 00:06:30.447 1+0 records out 00:06:30.447 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000239828 s, 17.1 MB/s 00:06:30.447 05:53:22 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:30.447 05:53:22 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:30.447 05:53:22 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:30.447 05:53:22 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:30.447 05:53:22 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:30.447 05:53:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:30.447 05:53:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:30.447 05:53:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:30.447 05:53:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.447 05:53:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:30.705 05:53:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:30.705 { 00:06:30.705 "nbd_device": "/dev/nbd0", 00:06:30.705 "bdev_name": "Malloc0" 00:06:30.705 }, 00:06:30.705 { 00:06:30.705 "nbd_device": "/dev/nbd1", 00:06:30.705 "bdev_name": "Malloc1" 00:06:30.705 } 
00:06:30.705 ]' 00:06:30.705 05:53:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:30.705 { 00:06:30.705 "nbd_device": "/dev/nbd0", 00:06:30.705 "bdev_name": "Malloc0" 00:06:30.705 }, 00:06:30.705 { 00:06:30.705 "nbd_device": "/dev/nbd1", 00:06:30.705 "bdev_name": "Malloc1" 00:06:30.705 } 00:06:30.705 ]' 00:06:30.705 05:53:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:30.963 05:53:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:30.963 /dev/nbd1' 00:06:30.963 05:53:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:30.963 /dev/nbd1' 00:06:30.963 05:53:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:30.963 05:53:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:30.963 05:53:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:30.963 05:53:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:30.963 05:53:22 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:30.963 05:53:22 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:30.963 05:53:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.963 05:53:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:30.963 05:53:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:30.963 05:53:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:30.963 05:53:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:30.963 05:53:22 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:30.963 256+0 records in 00:06:30.963 256+0 records out 00:06:30.963 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00833086 s, 126 MB/s 00:06:30.963 05:53:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:30.963 05:53:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:30.963 256+0 records in 00:06:30.963 256+0 records out 00:06:30.963 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.02783 s, 37.7 MB/s 00:06:30.963 05:53:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:30.963 05:53:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:30.964 256+0 records in 00:06:30.964 256+0 records out 00:06:30.964 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0254461 s, 41.2 MB/s 00:06:30.964 05:53:22 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:30.964 05:53:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.964 05:53:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:30.964 05:53:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:30.964 05:53:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:30.964 05:53:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:30.964 05:53:22 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:30.964 05:53:22 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:30.964 05:53:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:30.964 05:53:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:30.964 05:53:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:30.964 05:53:22 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:30.964 05:53:22 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:30.964 05:53:22 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.964 05:53:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.964 05:53:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:30.964 05:53:22 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:30.964 05:53:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:30.964 05:53:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:31.223 05:53:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:31.223 05:53:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:31.223 05:53:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:31.223 05:53:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:31.223 05:53:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:31.223 05:53:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:31.223 05:53:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:31.223 05:53:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:31.223 05:53:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:31.223 05:53:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:31.482 05:53:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:31.482 05:53:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:31.482 05:53:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:31.482 05:53:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:31.482 05:53:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:31.482 05:53:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:31.482 05:53:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:31.482 05:53:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:31.482 05:53:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:31.482 05:53:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.482 05:53:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:31.741 05:53:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:31.741 05:53:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:31.741 05:53:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:31.741 05:53:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:31.741 05:53:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:31.741 05:53:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:31.741 05:53:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:31.741 05:53:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:31.741 05:53:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:31.741 05:53:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:31.741 05:53:23 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:31.741 05:53:23 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:31.741 05:53:23 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:32.308 05:53:23 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:32.308 [2024-07-13 05:53:23.854829] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:32.308 [2024-07-13 05:53:23.887713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.308 [2024-07-13 05:53:23.887725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.308 [2024-07-13 05:53:23.917192] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:32.308 [2024-07-13 05:53:23.917282] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:32.308 [2024-07-13 05:53:23.917296] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:35.595 spdk_app_start Round 2 00:06:35.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:35.595 05:53:26 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:35.595 05:53:26 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:35.595 05:53:26 event.app_repeat -- event/event.sh@25 -- # waitforlisten 72177 /var/tmp/spdk-nbd.sock 00:06:35.595 05:53:26 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 72177 ']' 00:06:35.595 05:53:26 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:35.595 05:53:26 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:35.595 05:53:26 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
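For reference, each app_repeat round above builds and checks the same NBD fixture. The sketch below condenses the setup and data-verification steps using only commands that appear in the trace; the rpc/tmp/dev variable names are illustrative, the paths, sizes and device names are copied from the trace, and the real helper runs the write and compare passes as separate loops.

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest

    # Export two 64 MB malloc bdevs (4096-byte block size) as NBD devices.
    $rpc bdev_malloc_create 64 4096          # prints the new bdev name (Malloc0 in the trace)
    $rpc bdev_malloc_create 64 4096          # second bdev (Malloc1 in the trace)
    $rpc nbd_start_disk Malloc0 /dev/nbd0
    $rpc nbd_start_disk Malloc1 /dev/nbd1

    # Write 1 MiB of random data through each device, then read it back and compare.
    dd if=/dev/urandom of="$tmp" bs=4096 count=256
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct
        cmp -b -n 1M "$tmp" "$dev"
    done
    rm "$tmp"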
00:06:35.595 05:53:26 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:35.595 05:53:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:35.595 05:53:27 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:35.595 05:53:27 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:35.595 05:53:27 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:35.595 Malloc0 00:06:35.595 05:53:27 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:35.854 Malloc1 00:06:35.854 05:53:27 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:35.854 05:53:27 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.854 05:53:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:35.854 05:53:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:35.854 05:53:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.854 05:53:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:35.854 05:53:27 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:35.854 05:53:27 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.854 05:53:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:35.854 05:53:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:35.854 05:53:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.854 05:53:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:35.854 05:53:27 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:35.854 05:53:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:35.854 05:53:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:35.854 05:53:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:36.113 /dev/nbd0 00:06:36.113 05:53:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:36.113 05:53:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:36.113 05:53:27 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:36.113 05:53:27 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:36.113 05:53:27 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:36.113 05:53:27 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:36.113 05:53:27 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:36.113 05:53:27 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:36.113 05:53:27 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:36.113 05:53:27 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:36.113 05:53:27 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:36.113 1+0 records in 00:06:36.113 1+0 records out 
00:06:36.113 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000444736 s, 9.2 MB/s 00:06:36.113 05:53:27 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:36.113 05:53:27 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:36.113 05:53:27 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:36.113 05:53:27 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:36.113 05:53:27 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:36.113 05:53:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:36.113 05:53:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:36.113 05:53:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:36.372 /dev/nbd1 00:06:36.372 05:53:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:36.372 05:53:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:36.372 05:53:28 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:36.372 05:53:28 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:36.372 05:53:28 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:36.372 05:53:28 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:36.372 05:53:28 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:36.372 05:53:28 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:36.372 05:53:28 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:36.372 05:53:28 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:36.372 05:53:28 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:36.372 1+0 records in 00:06:36.372 1+0 records out 00:06:36.372 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000469801 s, 8.7 MB/s 00:06:36.372 05:53:28 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:36.631 05:53:28 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:36.631 05:53:28 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:36.631 05:53:28 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:36.631 05:53:28 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:36.631 05:53:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:36.631 05:53:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:36.631 05:53:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:36.631 05:53:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.631 05:53:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:36.891 05:53:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:36.891 { 00:06:36.891 "nbd_device": "/dev/nbd0", 00:06:36.891 "bdev_name": "Malloc0" 00:06:36.891 }, 00:06:36.891 { 00:06:36.891 "nbd_device": "/dev/nbd1", 00:06:36.891 "bdev_name": "Malloc1" 00:06:36.891 } 
00:06:36.891 ]' 00:06:36.891 05:53:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:36.891 { 00:06:36.891 "nbd_device": "/dev/nbd0", 00:06:36.891 "bdev_name": "Malloc0" 00:06:36.891 }, 00:06:36.891 { 00:06:36.891 "nbd_device": "/dev/nbd1", 00:06:36.891 "bdev_name": "Malloc1" 00:06:36.891 } 00:06:36.891 ]' 00:06:36.891 05:53:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:36.891 05:53:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:36.891 /dev/nbd1' 00:06:36.891 05:53:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:36.891 /dev/nbd1' 00:06:36.891 05:53:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:36.891 05:53:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:36.891 05:53:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:36.891 05:53:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:36.891 05:53:28 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:36.891 05:53:28 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:36.891 05:53:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.891 05:53:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:36.891 05:53:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:36.891 05:53:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:36.891 05:53:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:36.891 05:53:28 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:36.891 256+0 records in 00:06:36.891 256+0 records out 00:06:36.891 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010397 s, 101 MB/s 00:06:36.891 05:53:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:36.891 05:53:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:36.891 256+0 records in 00:06:36.891 256+0 records out 00:06:36.891 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0223431 s, 46.9 MB/s 00:06:36.891 05:53:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:36.891 05:53:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:36.891 256+0 records in 00:06:36.891 256+0 records out 00:06:36.891 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0329011 s, 31.9 MB/s 00:06:36.891 05:53:28 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:36.891 05:53:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.891 05:53:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:36.891 05:53:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:36.891 05:53:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:36.891 05:53:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:36.891 05:53:28 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:36.891 05:53:28 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:36.891 05:53:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:36.891 05:53:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:36.891 05:53:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:36.891 05:53:28 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:36.891 05:53:28 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:36.891 05:53:28 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.891 05:53:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.891 05:53:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:36.891 05:53:28 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:36.891 05:53:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:36.891 05:53:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:37.151 05:53:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:37.151 05:53:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:37.151 05:53:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:37.151 05:53:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:37.151 05:53:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:37.151 05:53:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:37.151 05:53:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:37.151 05:53:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:37.151 05:53:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:37.151 05:53:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:37.410 05:53:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:37.410 05:53:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:37.410 05:53:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:37.410 05:53:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:37.410 05:53:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:37.410 05:53:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:37.410 05:53:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:37.410 05:53:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:37.410 05:53:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:37.410 05:53:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.410 05:53:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:37.669 05:53:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:37.669 05:53:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:37.669 05:53:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:37.669 05:53:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:37.669 05:53:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:37.669 05:53:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:37.669 05:53:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:37.669 05:53:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:37.669 05:53:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:37.669 05:53:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:37.669 05:53:29 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:37.669 05:53:29 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:37.669 05:53:29 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:37.928 05:53:29 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:38.187 [2024-07-13 05:53:29.716234] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:38.187 [2024-07-13 05:53:29.749353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.187 [2024-07-13 05:53:29.749364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.187 [2024-07-13 05:53:29.779935] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:38.187 [2024-07-13 05:53:29.779993] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:38.187 [2024-07-13 05:53:29.780006] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:41.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:41.476 05:53:32 event.app_repeat -- event/event.sh@38 -- # waitforlisten 72177 /var/tmp/spdk-nbd.sock 00:06:41.476 05:53:32 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 72177 ']' 00:06:41.476 05:53:32 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:41.476 05:53:32 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:41.476 05:53:32 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
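The matching teardown, approximated from the trace: stop each NBD export over the RPC socket, poll /proc/partitions until the kernel device disappears (the waitfornbd_exit loop above), and confirm nbd_get_disks reports nothing left. Variable names are illustrative; the 20-try limit mirrors the counter in the trace, while the 0.1 s pause between tries is an assumption.

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

    for dev in /dev/nbd0 /dev/nbd1; do
        $rpc nbd_stop_disk "$dev"
        name=$(basename "$dev")
        for ((i = 1; i <= 20; i++)); do
            # keep polling while the device is still listed; stop once it is gone
            grep -q -w "$name" /proc/partitions || break
            sleep 0.1
        done
    done

    # Count the remaining exports the same way the trace does; 0 is expected here.
    count=$($rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -eq 0 ]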
00:06:41.476 05:53:32 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:41.476 05:53:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:41.476 05:53:32 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:41.476 05:53:32 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:41.476 05:53:32 event.app_repeat -- event/event.sh@39 -- # killprocess 72177 00:06:41.476 05:53:32 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 72177 ']' 00:06:41.476 05:53:32 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 72177 00:06:41.476 05:53:32 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:41.476 05:53:32 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:41.476 05:53:32 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72177 00:06:41.476 killing process with pid 72177 00:06:41.476 05:53:32 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:41.476 05:53:32 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:41.476 05:53:32 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72177' 00:06:41.476 05:53:32 event.app_repeat -- common/autotest_common.sh@967 -- # kill 72177 00:06:41.476 05:53:32 event.app_repeat -- common/autotest_common.sh@972 -- # wait 72177 00:06:41.476 spdk_app_start is called in Round 0. 00:06:41.476 Shutdown signal received, stop current app iteration 00:06:41.476 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 reinitialization... 00:06:41.476 spdk_app_start is called in Round 1. 00:06:41.476 Shutdown signal received, stop current app iteration 00:06:41.476 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 reinitialization... 00:06:41.476 spdk_app_start is called in Round 2. 00:06:41.476 Shutdown signal received, stop current app iteration 00:06:41.476 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 reinitialization... 00:06:41.476 spdk_app_start is called in Round 3. 
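killprocess, used above to take down the app_repeat target (pid 72177), is essentially a guarded kill-and-wait. This is an approximation of what the trace shows, not the helper's actual source; the real helper also special-cases targets launched through sudo, which is omitted here.

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 0              # nothing to do if the pid is already gone
        if [ "$(uname)" = Linux ]; then
            # comm is looked up so the sudo case can be detected (not handled in this sketch)
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        echo "killing process with pid $pid"
        kill "$pid"                             # plain kill, i.e. SIGTERM
        wait "$pid"
    }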
00:06:41.476 Shutdown signal received, stop current app iteration 00:06:41.476 05:53:33 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:41.476 05:53:33 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:41.476 00:06:41.476 real 0m18.008s 00:06:41.476 user 0m40.859s 00:06:41.476 sys 0m2.597s 00:06:41.476 05:53:33 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:41.476 ************************************ 00:06:41.476 END TEST app_repeat 00:06:41.476 05:53:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:41.476 ************************************ 00:06:41.476 05:53:33 event -- common/autotest_common.sh@1142 -- # return 0 00:06:41.476 05:53:33 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:41.476 05:53:33 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:41.476 05:53:33 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:41.476 05:53:33 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.476 05:53:33 event -- common/autotest_common.sh@10 -- # set +x 00:06:41.476 ************************************ 00:06:41.476 START TEST cpu_locks 00:06:41.476 ************************************ 00:06:41.476 05:53:33 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:41.476 * Looking for test storage... 00:06:41.476 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:41.476 05:53:33 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:41.477 05:53:33 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:41.477 05:53:33 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:41.477 05:53:33 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:41.477 05:53:33 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:41.477 05:53:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.477 05:53:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:41.477 ************************************ 00:06:41.477 START TEST default_locks 00:06:41.477 ************************************ 00:06:41.477 05:53:33 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:06:41.477 05:53:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=72596 00:06:41.477 05:53:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:41.477 05:53:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 72596 00:06:41.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.477 05:53:33 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 72596 ']' 00:06:41.477 05:53:33 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.477 05:53:33 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:41.477 05:53:33 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
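waitforlisten, which gates this and every following test, boils down to polling the target's RPC socket until it answers while bailing out if the pid dies first. The sketch below is an approximation only: the retry count, the 0.5 s pause and the use of rpc_get_methods as the readiness probe are assumptions rather than details taken from the trace.

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < 100; i++)); do
            if ! kill -0 "$pid" 2> /dev/null; then
                echo "ERROR: process (pid: $pid) is no longer running"
                return 1
            fi
            # declare the target ready as soon as any RPC round-trip succeeds
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                return 0
            fi
            sleep 0.5
        done
        return 1
    }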
00:06:41.477 05:53:33 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:41.477 05:53:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:41.744 [2024-07-13 05:53:33.204017] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:41.745 [2024-07-13 05:53:33.204086] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72596 ] 00:06:41.745 [2024-07-13 05:53:33.334340] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.745 [2024-07-13 05:53:33.370513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.745 [2024-07-13 05:53:33.399908] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:42.006 05:53:33 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:42.006 05:53:33 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:06:42.006 05:53:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 72596 00:06:42.006 05:53:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 72596 00:06:42.006 05:53:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:42.265 05:53:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 72596 00:06:42.265 05:53:33 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 72596 ']' 00:06:42.265 05:53:33 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 72596 00:06:42.265 05:53:33 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:06:42.265 05:53:33 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:42.265 05:53:33 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72596 00:06:42.265 killing process with pid 72596 00:06:42.265 05:53:33 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:42.265 05:53:33 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:42.265 05:53:33 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72596' 00:06:42.265 05:53:33 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 72596 00:06:42.265 05:53:33 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 72596 00:06:42.524 05:53:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 72596 00:06:42.524 05:53:34 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:42.524 05:53:34 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 72596 00:06:42.524 05:53:34 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:42.525 05:53:34 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:42.525 05:53:34 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:42.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
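The central assertion of the cpu_locks suite is the locks_exist check that just ran against pid 72596: a target started with core locks enabled must hold an spdk_cpu_lock file lock, which lslocks can see. A minimal rendering of that check, with the pid as a placeholder:

    # Succeeds only if the given pid currently holds an SPDK CPU-core lock.
    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }

    spdk_tgt_pid=72596      # pid taken from the trace above; substitute your own
    locks_exist "$spdk_tgt_pid"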
00:06:42.525 ERROR: process (pid: 72596) is no longer running 00:06:42.525 05:53:34 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:42.525 05:53:34 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 72596 00:06:42.525 05:53:34 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 72596 ']' 00:06:42.525 05:53:34 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.525 05:53:34 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:42.525 05:53:34 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.525 05:53:34 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:42.525 05:53:34 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:42.525 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (72596) - No such process 00:06:42.525 05:53:34 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:42.525 05:53:34 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:06:42.525 05:53:34 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:42.525 05:53:34 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:42.525 05:53:34 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:42.525 05:53:34 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:42.525 05:53:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:42.525 05:53:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:42.525 05:53:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:42.525 ************************************ 00:06:42.525 END TEST default_locks 00:06:42.525 ************************************ 00:06:42.525 05:53:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:42.525 00:06:42.525 real 0m1.081s 00:06:42.525 user 0m1.131s 00:06:42.525 sys 0m0.412s 00:06:42.525 05:53:34 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.525 05:53:34 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:42.784 05:53:34 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:42.784 05:53:34 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:42.784 05:53:34 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:42.784 05:53:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.784 05:53:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:42.784 ************************************ 00:06:42.784 START TEST default_locks_via_rpc 00:06:42.784 ************************************ 00:06:42.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
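After killprocess, default_locks deliberately re-runs waitforlisten against the dead pid and requires it to fail; that is what the NOT wrapper seen above asserts. Stripped of the helper, the logic amounts to the following, with killprocess and waitforlisten as sketched earlier:

    killprocess "$spdk_tgt_pid"
    if waitforlisten "$spdk_tgt_pid"; then
        echo "a killed target must not answer on its RPC socket any more" >&2
        exit 1
    fi
    # Expected path: waitforlisten prints
    # "ERROR: process (pid: ...) is no longer running" and returns non-zero.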
00:06:42.784 05:53:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:06:42.784 05:53:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=72635 00:06:42.784 05:53:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 72635 00:06:42.784 05:53:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:42.784 05:53:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 72635 ']' 00:06:42.784 05:53:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.784 05:53:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:42.784 05:53:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.784 05:53:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:42.784 05:53:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.785 [2024-07-13 05:53:34.344296] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:42.785 [2024-07-13 05:53:34.344405] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72635 ] 00:06:42.785 [2024-07-13 05:53:34.482524] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.044 [2024-07-13 05:53:34.522890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.044 [2024-07-13 05:53:34.553285] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:43.044 05:53:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:43.044 05:53:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:43.044 05:53:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:43.044 05:53:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:43.044 05:53:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.044 05:53:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:43.044 05:53:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:43.044 05:53:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:43.044 05:53:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:43.044 05:53:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:43.044 05:53:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:43.044 05:53:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:43.044 05:53:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.044 05:53:34 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:43.044 05:53:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 72635 00:06:43.044 05:53:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 72635 00:06:43.044 05:53:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:43.612 05:53:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 72635 00:06:43.612 05:53:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 72635 ']' 00:06:43.612 05:53:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 72635 00:06:43.612 05:53:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:43.612 05:53:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:43.612 05:53:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72635 00:06:43.612 killing process with pid 72635 00:06:43.612 05:53:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:43.612 05:53:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:43.612 05:53:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72635' 00:06:43.612 05:53:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 72635 00:06:43.612 05:53:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 72635 00:06:43.612 ************************************ 00:06:43.612 END TEST default_locks_via_rpc 00:06:43.612 ************************************ 00:06:43.612 00:06:43.612 real 0m1.011s 00:06:43.612 user 0m1.015s 00:06:43.612 sys 0m0.386s 00:06:43.612 05:53:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.612 05:53:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.871 05:53:35 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:43.871 05:53:35 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:43.871 05:53:35 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:43.871 05:53:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.871 05:53:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:43.871 ************************************ 00:06:43.871 START TEST non_locking_app_on_locked_coremask 00:06:43.871 ************************************ 00:06:43.871 05:53:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:43.871 05:53:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=72679 00:06:43.871 05:53:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:43.871 05:53:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 72679 /var/tmp/spdk.sock 00:06:43.871 05:53:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 72679 ']' 00:06:43.871 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.871 05:53:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.871 05:53:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:43.871 05:53:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.871 05:53:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:43.871 05:53:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.871 [2024-07-13 05:53:35.410244] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:43.871 [2024-07-13 05:53:35.410342] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72679 ] 00:06:43.871 [2024-07-13 05:53:35.547232] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.871 [2024-07-13 05:53:35.580842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.130 [2024-07-13 05:53:35.608431] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:44.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:44.130 05:53:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:44.130 05:53:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:44.130 05:53:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=72682 00:06:44.130 05:53:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 72682 /var/tmp/spdk2.sock 00:06:44.130 05:53:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 72682 ']' 00:06:44.130 05:53:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:44.130 05:53:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:44.130 05:53:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:44.130 05:53:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:44.130 05:53:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:44.130 05:53:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:44.130 [2024-07-13 05:53:35.780166] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
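Stepping back to default_locks_via_rpc, which wrapped up just above: it drives the same lock through RPC instead of command-line flags. The target starts holding the core-0 lock, framework_disable_cpumask_locks releases it (the test checks in between that no CPU-lock files remain), and framework_enable_cpumask_locks makes the target re-acquire it. Roughly, noting that rpc.py without -s talks to the default /var/tmp/spdk.sock, which is where this target listens:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc framework_disable_cpumask_locks     # running target drops its spdk_cpu_lock
    # ... no CPU-core lock files should exist at this point ...
    $rpc framework_enable_cpumask_locks      # target claims the lock for its cores again
    lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock   # pid 72635 in the trace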
00:06:44.130 [2024-07-13 05:53:35.780261] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72682 ] 00:06:44.389 [2024-07-13 05:53:35.918905] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:44.389 [2024-07-13 05:53:35.918965] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.389 [2024-07-13 05:53:35.987430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.389 [2024-07-13 05:53:36.041629] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:44.957 05:53:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:44.957 05:53:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:44.957 05:53:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 72679 00:06:44.957 05:53:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72679 00:06:45.216 05:53:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:46.153 05:53:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 72679 00:06:46.153 05:53:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 72679 ']' 00:06:46.153 05:53:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 72679 00:06:46.153 05:53:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:46.153 05:53:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:46.153 05:53:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72679 00:06:46.153 killing process with pid 72679 00:06:46.153 05:53:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:46.153 05:53:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:46.153 05:53:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72679' 00:06:46.153 05:53:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 72679 00:06:46.153 05:53:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 72679 00:06:46.411 05:53:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 72682 00:06:46.411 05:53:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 72682 ']' 00:06:46.411 05:53:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 72682 00:06:46.411 05:53:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:46.411 05:53:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:46.411 05:53:38 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72682 00:06:46.411 05:53:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:46.411 05:53:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:46.411 killing process with pid 72682 00:06:46.411 05:53:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72682' 00:06:46.411 05:53:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 72682 00:06:46.411 05:53:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 72682 00:06:46.671 00:06:46.671 real 0m2.903s 00:06:46.671 user 0m3.315s 00:06:46.671 sys 0m0.870s 00:06:46.671 05:53:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:46.671 05:53:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.671 ************************************ 00:06:46.671 END TEST non_locking_app_on_locked_coremask 00:06:46.671 ************************************ 00:06:46.671 05:53:38 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:46.671 05:53:38 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:46.671 05:53:38 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:46.671 05:53:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.671 05:53:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:46.671 ************************************ 00:06:46.671 START TEST locking_app_on_unlocked_coremask 00:06:46.671 ************************************ 00:06:46.671 05:53:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:06:46.671 05:53:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=72743 00:06:46.671 05:53:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 72743 /var/tmp/spdk.sock 00:06:46.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.671 05:53:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 72743 ']' 00:06:46.671 05:53:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:46.671 05:53:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.671 05:53:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:46.671 05:53:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:46.671 05:53:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:46.671 05:53:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.671 [2024-07-13 05:53:38.376432] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:46.671 [2024-07-13 05:53:38.376556] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72743 ] 00:06:46.930 [2024-07-13 05:53:38.514229] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:46.930 [2024-07-13 05:53:38.514263] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.930 [2024-07-13 05:53:38.547814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.930 [2024-07-13 05:53:38.574314] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:47.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:47.189 05:53:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:47.189 05:53:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:47.189 05:53:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=72752 00:06:47.189 05:53:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:47.189 05:53:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 72752 /var/tmp/spdk2.sock 00:06:47.189 05:53:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 72752 ']' 00:06:47.189 05:53:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:47.189 05:53:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:47.189 05:53:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:47.189 05:53:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:47.189 05:53:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.189 [2024-07-13 05:53:38.749121] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
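The two scenarios bracketing this point differ only in which instance opts out of the core lock; condensed from the spdk_tgt command lines in the trace. Each pair is a separate scenario, shown here only to contrast the flags, and the second instance always gets its own RPC socket via -r.

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    # non_locking_app_on_locked_coremask: the first target holds the core-0 lock,
    # so the second can share core 0 only because it disables the lock check.
    $spdk_tgt -m 0x1 &
    $spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &

    # locking_app_on_unlocked_coremask: the order is reversed; the first target
    # never takes the lock ("CPU core locks deactivated." above), so the second,
    # with locks enabled, can still start on the same core.
    $spdk_tgt -m 0x1 --disable-cpumask-locks &
    $spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &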
00:06:47.189 [2024-07-13 05:53:38.749490] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72752 ] 00:06:47.189 [2024-07-13 05:53:38.888524] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.448 [2024-07-13 05:53:38.964772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.448 [2024-07-13 05:53:39.029280] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:48.016 05:53:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:48.016 05:53:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:48.016 05:53:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 72752 00:06:48.016 05:53:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72752 00:06:48.016 05:53:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:48.952 05:53:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 72743 00:06:48.952 05:53:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 72743 ']' 00:06:48.952 05:53:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 72743 00:06:48.952 05:53:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:48.952 05:53:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:48.952 05:53:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72743 00:06:48.952 killing process with pid 72743 00:06:48.952 05:53:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:48.952 05:53:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:48.952 05:53:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72743' 00:06:48.952 05:53:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 72743 00:06:48.952 05:53:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 72743 00:06:49.519 05:53:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 72752 00:06:49.519 05:53:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 72752 ']' 00:06:49.519 05:53:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 72752 00:06:49.519 05:53:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:49.519 05:53:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:49.519 05:53:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72752 00:06:49.519 killing process with pid 72752 00:06:49.519 05:53:41 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:49.519 05:53:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:49.519 05:53:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72752' 00:06:49.519 05:53:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 72752 00:06:49.519 05:53:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 72752 00:06:49.519 00:06:49.519 real 0m2.908s 00:06:49.519 user 0m3.387s 00:06:49.519 sys 0m0.869s 00:06:49.519 05:53:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:49.519 05:53:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.519 ************************************ 00:06:49.519 END TEST locking_app_on_unlocked_coremask 00:06:49.519 ************************************ 00:06:49.778 05:53:41 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:49.778 05:53:41 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:49.778 05:53:41 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:49.778 05:53:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.778 05:53:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:49.778 ************************************ 00:06:49.778 START TEST locking_app_on_locked_coremask 00:06:49.778 ************************************ 00:06:49.778 05:53:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:06:49.778 05:53:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=72812 00:06:49.778 05:53:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 72812 /var/tmp/spdk.sock 00:06:49.778 05:53:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:49.778 05:53:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 72812 ']' 00:06:49.778 05:53:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.778 05:53:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:49.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.778 05:53:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.778 05:53:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:49.778 05:53:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.778 [2024-07-13 05:53:41.334550] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:06:49.778 [2024-07-13 05:53:41.334651] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72812 ] 00:06:49.778 [2024-07-13 05:53:41.471839] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.037 [2024-07-13 05:53:41.508629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.037 [2024-07-13 05:53:41.536996] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:50.037 05:53:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:50.037 05:53:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:50.037 05:53:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=72822 00:06:50.037 05:53:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 72822 /var/tmp/spdk2.sock 00:06:50.037 05:53:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:50.037 05:53:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:50.037 05:53:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 72822 /var/tmp/spdk2.sock 00:06:50.037 05:53:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:50.037 05:53:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:50.037 05:53:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:50.037 05:53:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:50.037 05:53:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 72822 /var/tmp/spdk2.sock 00:06:50.037 05:53:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 72822 ']' 00:06:50.037 05:53:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:50.037 05:53:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:50.037 05:53:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:50.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:50.037 05:53:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:50.037 05:53:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:50.037 [2024-07-13 05:53:41.708052] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:06:50.037 [2024-07-13 05:53:41.708345] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72822 ] 00:06:50.296 [2024-07-13 05:53:41.850525] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 72812 has claimed it. 00:06:50.296 [2024-07-13 05:53:41.850621] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:50.863 ERROR: process (pid: 72822) is no longer running 00:06:50.863 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (72822) - No such process 00:06:50.863 05:53:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:50.863 05:53:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:50.863 05:53:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:50.863 05:53:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:50.863 05:53:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:50.863 05:53:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:50.863 05:53:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 72812 00:06:50.863 05:53:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72812 00:06:50.863 05:53:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:51.122 05:53:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 72812 00:06:51.122 05:53:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 72812 ']' 00:06:51.122 05:53:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 72812 00:06:51.122 05:53:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:51.122 05:53:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:51.122 05:53:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72812 00:06:51.122 killing process with pid 72812 00:06:51.122 05:53:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:51.122 05:53:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:51.122 05:53:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72812' 00:06:51.122 05:53:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 72812 00:06:51.122 05:53:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 72812 00:06:51.381 00:06:51.381 real 0m1.718s 00:06:51.381 user 0m1.989s 00:06:51.381 sys 0m0.447s 00:06:51.381 05:53:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.381 ************************************ 00:06:51.381 END 
TEST locking_app_on_locked_coremask 00:06:51.381 ************************************ 00:06:51.381 05:53:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:51.381 05:53:43 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:51.381 05:53:43 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:51.381 05:53:43 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:51.381 05:53:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.381 05:53:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:51.381 ************************************ 00:06:51.381 START TEST locking_overlapped_coremask 00:06:51.381 ************************************ 00:06:51.381 05:53:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:06:51.381 05:53:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=72862 00:06:51.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.381 05:53:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 72862 /var/tmp/spdk.sock 00:06:51.381 05:53:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:51.381 05:53:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 72862 ']' 00:06:51.381 05:53:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.381 05:53:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:51.381 05:53:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.381 05:53:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:51.381 05:53:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:51.638 [2024-07-13 05:53:43.113044] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:06:51.638 [2024-07-13 05:53:43.113141] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72862 ] 00:06:51.638 [2024-07-13 05:53:43.246761] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:51.638 [2024-07-13 05:53:43.281550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.639 [2024-07-13 05:53:43.281607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:51.639 [2024-07-13 05:53:43.281613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.639 [2024-07-13 05:53:43.310973] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:52.573 05:53:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:52.573 05:53:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:52.573 05:53:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=72880 00:06:52.573 05:53:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:52.573 05:53:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 72880 /var/tmp/spdk2.sock 00:06:52.573 05:53:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:52.573 05:53:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 72880 /var/tmp/spdk2.sock 00:06:52.573 05:53:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:52.573 05:53:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:52.573 05:53:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:52.573 05:53:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:52.573 05:53:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 72880 /var/tmp/spdk2.sock 00:06:52.573 05:53:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 72880 ']' 00:06:52.573 05:53:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:52.573 05:53:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:52.573 05:53:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:52.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:52.573 05:53:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:52.573 05:53:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:52.573 [2024-07-13 05:53:44.106658] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:06:52.573 [2024-07-13 05:53:44.106754] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72880 ] 00:06:52.573 [2024-07-13 05:53:44.251116] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 72862 has claimed it. 00:06:52.573 [2024-07-13 05:53:44.251366] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:53.138 ERROR: process (pid: 72880) is no longer running 00:06:53.138 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (72880) - No such process 00:06:53.138 05:53:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:53.138 05:53:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:53.138 05:53:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:53.138 05:53:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:53.138 05:53:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:53.138 05:53:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:53.138 05:53:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:53.138 05:53:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:53.138 05:53:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:53.138 05:53:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:53.138 05:53:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 72862 00:06:53.138 05:53:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 72862 ']' 00:06:53.138 05:53:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 72862 00:06:53.138 05:53:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:06:53.138 05:53:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:53.138 05:53:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72862 00:06:53.138 killing process with pid 72862 00:06:53.138 05:53:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:53.138 05:53:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:53.138 05:53:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72862' 00:06:53.138 05:53:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 72862 00:06:53.138 05:53:44 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 72862 00:06:53.396 00:06:53.396 real 0m2.022s 00:06:53.396 user 0m5.868s 00:06:53.396 sys 0m0.306s 00:06:53.396 05:53:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:53.396 05:53:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:53.396 ************************************ 00:06:53.396 END TEST locking_overlapped_coremask 00:06:53.396 ************************************ 00:06:53.396 05:53:45 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:53.396 05:53:45 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:53.396 05:53:45 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:53.396 05:53:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.396 05:53:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:53.664 ************************************ 00:06:53.664 START TEST locking_overlapped_coremask_via_rpc 00:06:53.664 ************************************ 00:06:53.664 05:53:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:06:53.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.664 05:53:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=72920 00:06:53.664 05:53:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:53.664 05:53:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 72920 /var/tmp/spdk.sock 00:06:53.664 05:53:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 72920 ']' 00:06:53.664 05:53:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.664 05:53:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:53.664 05:53:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.664 05:53:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:53.664 05:53:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.664 [2024-07-13 05:53:45.172537] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:53.664 [2024-07-13 05:53:45.172829] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72920 ] 00:06:53.664 [2024-07-13 05:53:45.309336] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
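For context on the lock verification traced above at event/cpu_locks.sh@36-38 (and repeated at the end of the via_rpc test): check_remaining_locks simply compares the lock files actually present under /var/tmp against a brace-expanded list of the cores a 0x7 coremask should have claimed. A minimal sketch of that comparison, keeping the spdk_cpu_lock_NNN naming from the trace, is:

    # Verify that exactly cores 0-2 hold CPU lock files (matches a 0x7 coremask).
    locks=(/var/tmp/spdk_cpu_lock_*)                    # lock files found on disk
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # cores 000, 001 and 002
    if [[ "${locks[*]}" == "${locks_expected[*]}" ]]; then
        echo "CPU lock files match the expected coremask"
    else
        echo "unexpected CPU lock files: ${locks[*]}" >&2
        exit 1
    fi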
00:06:53.664 [2024-07-13 05:53:45.309711] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:53.664 [2024-07-13 05:53:45.347085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.664 [2024-07-13 05:53:45.347232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:53.664 [2024-07-13 05:53:45.347236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.664 [2024-07-13 05:53:45.377665] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:53.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:53.934 05:53:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:53.934 05:53:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:53.934 05:53:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=72930 00:06:53.934 05:53:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 72930 /var/tmp/spdk2.sock 00:06:53.934 05:53:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:53.934 05:53:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 72930 ']' 00:06:53.934 05:53:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:53.934 05:53:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:53.935 05:53:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:53.935 05:53:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:53.935 05:53:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.935 [2024-07-13 05:53:45.571641] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:53.935 [2024-07-13 05:53:45.571738] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72930 ] 00:06:54.192 [2024-07-13 05:53:45.718303] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
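Both targets in this test are launched with --disable-cpumask-locks even though their coremasks (0x7 and 0x1c) overlap on core 2, so neither takes a lock file at startup; the conflict only surfaces later when locking is enabled over RPC. A condensed, illustrative version of that launch sequence (the backgrounding and pid bookkeeping here are simplified relative to the suite's run_test/waitforlisten plumbing):

    # Start two SPDK targets whose coremasks overlap on core 2, with core locks off.
    SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$SPDK_TGT" -m 0x7 --disable-cpumask-locks &
    spdk_tgt_pid=$!
    "$SPDK_TGT" -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
    spdk_tgt_pid2=$!
    # Neither process creates /var/tmp/spdk_cpu_lock_* yet; locks are only claimed
    # once framework_enable_cpumask_locks is called on one of the targets.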
00:06:54.192 [2024-07-13 05:53:45.722403] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:54.192 [2024-07-13 05:53:45.798816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:54.192 [2024-07-13 05:53:45.798928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:54.192 [2024-07-13 05:53:45.798928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:54.192 [2024-07-13 05:53:45.858798] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:55.127 05:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:55.127 05:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:55.127 05:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:55.127 05:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.127 05:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.127 05:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.127 05:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:55.127 05:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:55.127 05:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:55.127 05:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:55.127 05:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:55.127 05:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:55.127 05:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:55.127 05:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:55.127 05:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.127 05:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.127 [2024-07-13 05:53:46.510576] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 72920 has claimed it. 00:06:55.127 request: 00:06:55.127 { 00:06:55.127 "method": "framework_enable_cpumask_locks", 00:06:55.127 "req_id": 1 00:06:55.127 } 00:06:55.127 Got JSON-RPC error response 00:06:55.127 response: 00:06:55.127 { 00:06:55.127 "code": -32603, 00:06:55.127 "message": "Failed to claim CPU core: 2" 00:06:55.127 } 00:06:55.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
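The JSON-RPC exchange above is the expected failure path: the first target has already claimed core 2, so framework_enable_cpumask_locks on the second target returns error -32603 ("Failed to claim CPU core: 2"). Driving the same call by hand, assuming SPDK's scripts/rpc.py client rather than the suite's rpc_cmd wrapper, would look roughly like:

    # Enable CPU core locks on the second target and treat the documented
    # "Failed to claim CPU core: 2" response as an expected failure.
    if scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks; then
        echo "locks were claimed unexpectedly" >&2
        exit 1
    else
        echo "core claim failed as expected (already held by the first target)"
    fi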
00:06:55.127 05:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:55.127 05:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:55.127 05:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:55.127 05:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:55.127 05:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:55.127 05:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 72920 /var/tmp/spdk.sock 00:06:55.127 05:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 72920 ']' 00:06:55.127 05:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.127 05:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:55.127 05:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.127 05:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:55.127 05:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:55.127 05:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:55.127 05:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:55.127 05:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 72930 /var/tmp/spdk2.sock 00:06:55.127 05:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 72930 ']' 00:06:55.127 05:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:55.127 05:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:55.127 05:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
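The repeated "Waiting for process to start up and listen on UNIX domain socket ..." lines come from the suite's waitforlisten helper, which polls the RPC socket until the target answers or the retry budget (max_retries=100 in the trace) runs out. A simplified stand-in, again assuming scripts/rpc.py is on hand (the wait_for_rpc_socket name below is illustrative, not the suite's):

    # Poll a UNIX-domain RPC socket until the target responds, up to 100 attempts.
    wait_for_rpc_socket() {
        local sock=$1 max_retries=${2:-100} i
        for ((i = 0; i < max_retries; i++)); do
            if scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; then
                return 0
            fi
            sleep 0.5
        done
        echo "timed out waiting for $sock" >&2
        return 1
    }
    wait_for_rpc_socket /var/tmp/spdk2.sock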
00:06:55.127 05:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:55.127 05:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.385 05:53:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:55.385 05:53:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:55.385 05:53:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:55.385 05:53:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:55.385 05:53:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:55.385 05:53:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:55.385 00:06:55.385 real 0m1.952s 00:06:55.385 user 0m1.109s 00:06:55.385 sys 0m0.155s 00:06:55.385 05:53:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:55.385 05:53:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.385 ************************************ 00:06:55.385 END TEST locking_overlapped_coremask_via_rpc 00:06:55.385 ************************************ 00:06:55.385 05:53:47 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:55.385 05:53:47 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:55.385 05:53:47 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 72920 ]] 00:06:55.385 05:53:47 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 72920 00:06:55.385 05:53:47 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 72920 ']' 00:06:55.385 05:53:47 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 72920 00:06:55.385 05:53:47 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:55.385 05:53:47 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:55.644 05:53:47 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72920 00:06:55.644 killing process with pid 72920 00:06:55.644 05:53:47 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:55.644 05:53:47 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:55.644 05:53:47 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72920' 00:06:55.644 05:53:47 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 72920 00:06:55.644 05:53:47 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 72920 00:06:55.644 05:53:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 72930 ]] 00:06:55.644 05:53:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 72930 00:06:55.644 05:53:47 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 72930 ']' 00:06:55.644 05:53:47 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 72930 00:06:55.644 05:53:47 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:55.644 05:53:47 
event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:55.644 05:53:47 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72930 00:06:55.903 killing process with pid 72930 00:06:55.903 05:53:47 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:55.903 05:53:47 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:55.903 05:53:47 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72930' 00:06:55.903 05:53:47 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 72930 00:06:55.903 05:53:47 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 72930 00:06:55.903 05:53:47 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:55.903 Process with pid 72920 is not found 00:06:55.903 Process with pid 72930 is not found 00:06:55.903 05:53:47 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:55.903 05:53:47 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 72920 ]] 00:06:55.903 05:53:47 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 72920 00:06:55.903 05:53:47 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 72920 ']' 00:06:55.903 05:53:47 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 72920 00:06:55.903 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (72920) - No such process 00:06:55.903 05:53:47 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 72920 is not found' 00:06:55.903 05:53:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 72930 ]] 00:06:55.903 05:53:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 72930 00:06:55.903 05:53:47 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 72930 ']' 00:06:55.903 05:53:47 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 72930 00:06:55.903 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (72930) - No such process 00:06:55.903 05:53:47 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 72930 is not found' 00:06:55.903 05:53:47 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:55.903 00:06:55.903 real 0m14.550s 00:06:55.903 user 0m27.470s 00:06:55.903 sys 0m4.082s 00:06:55.903 05:53:47 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:55.903 05:53:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:55.903 ************************************ 00:06:55.903 END TEST cpu_locks 00:06:55.903 ************************************ 00:06:56.162 05:53:47 event -- common/autotest_common.sh@1142 -- # return 0 00:06:56.162 00:06:56.162 real 0m38.911s 00:06:56.162 user 1m17.180s 00:06:56.162 sys 0m7.321s 00:06:56.162 ************************************ 00:06:56.162 END TEST event 00:06:56.162 ************************************ 00:06:56.162 05:53:47 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:56.162 05:53:47 event -- common/autotest_common.sh@10 -- # set +x 00:06:56.162 05:53:47 -- common/autotest_common.sh@1142 -- # return 0 00:06:56.162 05:53:47 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:56.162 05:53:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:56.162 05:53:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.162 05:53:47 -- common/autotest_common.sh@10 -- # set +x 00:06:56.162 ************************************ 00:06:56.162 START TEST thread 
00:06:56.162 ************************************ 00:06:56.162 05:53:47 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:56.162 * Looking for test storage... 00:06:56.162 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:56.162 05:53:47 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:56.162 05:53:47 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:56.162 05:53:47 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.162 05:53:47 thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.162 ************************************ 00:06:56.162 START TEST thread_poller_perf 00:06:56.162 ************************************ 00:06:56.162 05:53:47 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:56.162 [2024-07-13 05:53:47.805287] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:56.163 [2024-07-13 05:53:47.805624] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73053 ] 00:06:56.421 [2024-07-13 05:53:47.942518] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.421 [2024-07-13 05:53:47.973927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.421 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:57.358 ====================================== 00:06:57.358 busy:2211068610 (cyc) 00:06:57.358 total_run_count: 370000 00:06:57.358 tsc_hz: 2200000000 (cyc) 00:06:57.358 ====================================== 00:06:57.358 poller_cost: 5975 (cyc), 2715 (nsec) 00:06:57.358 00:06:57.358 real 0m1.250s 00:06:57.358 user 0m1.101s 00:06:57.358 sys 0m0.042s 00:06:57.358 05:53:49 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.358 05:53:49 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:57.358 ************************************ 00:06:57.358 END TEST thread_poller_perf 00:06:57.358 ************************************ 00:06:57.358 05:53:49 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:57.358 05:53:49 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:57.358 05:53:49 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:57.358 05:53:49 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.358 05:53:49 thread -- common/autotest_common.sh@10 -- # set +x 00:06:57.617 ************************************ 00:06:57.617 START TEST thread_poller_perf 00:06:57.617 ************************************ 00:06:57.617 05:53:49 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:57.617 [2024-07-13 05:53:49.103173] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:06:57.617 [2024-07-13 05:53:49.103421] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73083 ] 00:06:57.617 [2024-07-13 05:53:49.238973] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.617 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:57.617 [2024-07-13 05:53:49.271427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.995 ====================================== 00:06:58.995 busy:2201793722 (cyc) 00:06:58.995 total_run_count: 4847000 00:06:58.995 tsc_hz: 2200000000 (cyc) 00:06:58.995 ====================================== 00:06:58.995 poller_cost: 454 (cyc), 206 (nsec) 00:06:58.995 00:06:58.995 real 0m1.237s 00:06:58.995 user 0m1.083s 00:06:58.995 sys 0m0.048s 00:06:58.995 ************************************ 00:06:58.995 END TEST thread_poller_perf 00:06:58.995 ************************************ 00:06:58.995 05:53:50 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.995 05:53:50 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:58.995 05:53:50 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:58.995 05:53:50 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:58.995 ************************************ 00:06:58.995 END TEST thread 00:06:58.995 ************************************ 00:06:58.995 00:06:58.995 real 0m2.672s 00:06:58.995 user 0m2.264s 00:06:58.995 sys 0m0.185s 00:06:58.995 05:53:50 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.995 05:53:50 thread -- common/autotest_common.sh@10 -- # set +x 00:06:58.995 05:53:50 -- common/autotest_common.sh@1142 -- # return 0 00:06:58.995 05:53:50 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:58.995 05:53:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:58.995 05:53:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.995 05:53:50 -- common/autotest_common.sh@10 -- # set +x 00:06:58.995 ************************************ 00:06:58.995 START TEST accel 00:06:58.995 ************************************ 00:06:58.995 05:53:50 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:58.995 * Looking for test storage... 
00:06:58.995 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:58.995 05:53:50 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:58.995 05:53:50 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:58.995 05:53:50 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:58.995 05:53:50 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=73154 00:06:58.995 05:53:50 accel -- accel/accel.sh@63 -- # waitforlisten 73154 00:06:58.995 05:53:50 accel -- common/autotest_common.sh@829 -- # '[' -z 73154 ']' 00:06:58.995 05:53:50 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:58.995 05:53:50 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:58.995 05:53:50 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.995 05:53:50 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:58.995 05:53:50 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:58.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.995 05:53:50 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.995 05:53:50 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:58.995 05:53:50 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.995 05:53:50 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:58.995 05:53:50 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.995 05:53:50 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:58.995 05:53:50 accel -- common/autotest_common.sh@10 -- # set +x 00:06:58.995 05:53:50 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:58.995 05:53:50 accel -- accel/accel.sh@41 -- # jq -r . 00:06:58.995 [2024-07-13 05:53:50.566614] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:58.995 [2024-07-13 05:53:50.566708] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73154 ] 00:06:58.995 [2024-07-13 05:53:50.703828] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.255 [2024-07-13 05:53:50.738678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.255 [2024-07-13 05:53:50.767651] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:59.255 05:53:50 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:59.255 05:53:50 accel -- common/autotest_common.sh@862 -- # return 0 00:06:59.255 05:53:50 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:59.255 05:53:50 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:59.255 05:53:50 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:59.255 05:53:50 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:59.255 05:53:50 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:59.255 05:53:50 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:59.255 05:53:50 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.255 05:53:50 accel -- common/autotest_common.sh@10 -- # set +x 00:06:59.255 05:53:50 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:59.255 05:53:50 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.255 05:53:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:59.255 05:53:50 accel -- accel/accel.sh@72 -- # IFS== 00:06:59.255 05:53:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:59.255 05:53:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:59.255 05:53:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:59.255 05:53:50 accel -- accel/accel.sh@72 -- # IFS== 00:06:59.255 05:53:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:59.255 05:53:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:59.255 05:53:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:59.255 05:53:50 accel -- accel/accel.sh@72 -- # IFS== 00:06:59.255 05:53:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:59.255 05:53:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:59.255 05:53:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:59.255 05:53:50 accel -- accel/accel.sh@72 -- # IFS== 00:06:59.255 05:53:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:59.255 05:53:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:59.255 05:53:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:59.255 05:53:50 accel -- accel/accel.sh@72 -- # IFS== 00:06:59.255 05:53:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:59.255 05:53:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:59.255 05:53:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:59.255 05:53:50 accel -- accel/accel.sh@72 -- # IFS== 00:06:59.255 05:53:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:59.255 05:53:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:59.255 05:53:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:59.255 05:53:50 accel -- accel/accel.sh@72 -- # IFS== 00:06:59.255 05:53:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:59.255 05:53:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:59.255 05:53:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:59.255 05:53:50 accel -- accel/accel.sh@72 -- # IFS== 00:06:59.255 05:53:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:59.255 05:53:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:59.255 05:53:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:59.255 05:53:50 accel -- accel/accel.sh@72 -- # IFS== 00:06:59.255 05:53:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:59.255 05:53:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:59.255 05:53:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:59.255 05:53:50 accel -- accel/accel.sh@72 -- # IFS== 00:06:59.255 05:53:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:59.255 05:53:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:59.255 05:53:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:59.255 05:53:50 accel -- accel/accel.sh@72 -- # IFS== 00:06:59.255 05:53:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:59.255 05:53:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:59.255 05:53:50 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:06:59.255 05:53:50 accel -- accel/accel.sh@72 -- # IFS== 00:06:59.255 05:53:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:59.255 05:53:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:59.255 05:53:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:59.255 05:53:50 accel -- accel/accel.sh@72 -- # IFS== 00:06:59.255 05:53:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:59.255 05:53:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:59.255 05:53:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:59.255 05:53:50 accel -- accel/accel.sh@72 -- # IFS== 00:06:59.255 05:53:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:59.255 05:53:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:59.255 05:53:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:59.255 05:53:50 accel -- accel/accel.sh@72 -- # IFS== 00:06:59.255 05:53:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:59.255 05:53:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:59.255 05:53:50 accel -- accel/accel.sh@75 -- # killprocess 73154 00:06:59.255 05:53:50 accel -- common/autotest_common.sh@948 -- # '[' -z 73154 ']' 00:06:59.255 05:53:50 accel -- common/autotest_common.sh@952 -- # kill -0 73154 00:06:59.255 05:53:50 accel -- common/autotest_common.sh@953 -- # uname 00:06:59.255 05:53:50 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:59.255 05:53:50 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73154 00:06:59.255 05:53:50 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:59.255 05:53:50 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:59.255 killing process with pid 73154 00:06:59.255 05:53:50 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73154' 00:06:59.255 05:53:50 accel -- common/autotest_common.sh@967 -- # kill 73154 00:06:59.255 05:53:50 accel -- common/autotest_common.sh@972 -- # wait 73154 00:06:59.515 05:53:51 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:59.515 05:53:51 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:59.515 05:53:51 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:59.515 05:53:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.515 05:53:51 accel -- common/autotest_common.sh@10 -- # set +x 00:06:59.515 05:53:51 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:59.515 05:53:51 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:59.515 05:53:51 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:59.515 05:53:51 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:59.515 05:53:51 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:59.515 05:53:51 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.515 05:53:51 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.515 05:53:51 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:59.515 05:53:51 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:59.515 05:53:51 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:06:59.515 05:53:51 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.515 05:53:51 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:59.774 05:53:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:59.774 05:53:51 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:59.774 05:53:51 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:59.774 05:53:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.774 05:53:51 accel -- common/autotest_common.sh@10 -- # set +x 00:06:59.774 ************************************ 00:06:59.774 START TEST accel_missing_filename 00:06:59.774 ************************************ 00:06:59.774 05:53:51 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:59.774 05:53:51 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:59.774 05:53:51 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:59.774 05:53:51 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:59.774 05:53:51 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:59.774 05:53:51 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:59.774 05:53:51 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:59.774 05:53:51 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:59.774 05:53:51 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:59.774 05:53:51 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:59.774 05:53:51 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:59.774 05:53:51 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:59.774 05:53:51 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.774 05:53:51 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.774 05:53:51 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:59.774 05:53:51 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:59.774 05:53:51 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:59.774 [2024-07-13 05:53:51.280932] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:59.774 [2024-07-13 05:53:51.281031] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73198 ] 00:06:59.774 [2024-07-13 05:53:51.417566] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.774 [2024-07-13 05:53:51.459800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.774 [2024-07-13 05:53:51.490728] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:00.034 [2024-07-13 05:53:51.536962] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:00.034 A filename is required. 
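accel_missing_filename deliberately runs accel_perf with -t 1 -w compress but no input file and expects exactly the "A filename is required." failure shown above; the suite's NOT wrapper simply inverts the exit status. Stripped of that plumbing, the assertion amounts to something like the following sketch (binary path and flags taken from the trace):

    # Assert that compress without a filename fails (the idea behind the NOT wrapper).
    if /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress; then
        echo "compress with no input file unexpectedly succeeded" >&2
        exit 1
    else
        echo "got the expected 'A filename is required.' failure"
    fi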
00:07:00.034 05:53:51 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:07:00.034 05:53:51 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:00.034 05:53:51 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:07:00.034 05:53:51 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:07:00.034 05:53:51 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:07:00.034 05:53:51 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:00.034 00:07:00.034 real 0m0.331s 00:07:00.034 user 0m0.198s 00:07:00.034 sys 0m0.080s 00:07:00.034 05:53:51 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:00.034 05:53:51 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:00.034 ************************************ 00:07:00.034 END TEST accel_missing_filename 00:07:00.034 ************************************ 00:07:00.034 05:53:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:00.034 05:53:51 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:00.034 05:53:51 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:00.034 05:53:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.034 05:53:51 accel -- common/autotest_common.sh@10 -- # set +x 00:07:00.034 ************************************ 00:07:00.034 START TEST accel_compress_verify 00:07:00.034 ************************************ 00:07:00.034 05:53:51 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:00.034 05:53:51 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:07:00.034 05:53:51 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:00.034 05:53:51 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:00.034 05:53:51 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:00.034 05:53:51 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:00.034 05:53:51 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:00.034 05:53:51 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:00.034 05:53:51 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:00.034 05:53:51 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:00.034 05:53:51 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:00.034 05:53:51 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:00.034 05:53:51 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.034 05:53:51 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.034 05:53:51 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:00.034 05:53:51 accel.accel_compress_verify -- 
accel/accel.sh@40 -- # local IFS=, 00:07:00.034 05:53:51 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:07:00.034 [2024-07-13 05:53:51.666192] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:00.034 [2024-07-13 05:53:51.666272] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73217 ] 00:07:00.293 [2024-07-13 05:53:51.793030] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.293 [2024-07-13 05:53:51.832565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.293 [2024-07-13 05:53:51.869169] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:00.293 [2024-07-13 05:53:51.909511] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:00.293 00:07:00.293 Compression does not support the verify option, aborting. 00:07:00.293 05:53:51 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:07:00.293 05:53:51 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:00.293 05:53:51 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:07:00.293 05:53:51 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:07:00.293 05:53:51 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:07:00.293 05:53:51 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:00.293 00:07:00.293 real 0m0.319s 00:07:00.293 user 0m0.196s 00:07:00.293 sys 0m0.070s 00:07:00.293 05:53:51 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:00.293 ************************************ 00:07:00.293 END TEST accel_compress_verify 00:07:00.293 ************************************ 00:07:00.293 05:53:51 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:00.293 05:53:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:00.293 05:53:51 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:00.293 05:53:51 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:00.293 05:53:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.293 05:53:51 accel -- common/autotest_common.sh@10 -- # set +x 00:07:00.293 ************************************ 00:07:00.293 START TEST accel_wrong_workload 00:07:00.293 ************************************ 00:07:00.293 05:53:52 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:07:00.293 05:53:52 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:07:00.294 05:53:52 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:00.294 05:53:52 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:00.294 05:53:52 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:00.294 05:53:52 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:00.294 05:53:52 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:00.294 05:53:52 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
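The accel_compress_verify run above exercised the complementary rejection: the compress workload was given a real input file via -l plus the -y verify switch, and accel_perf aborted with "Compression does not support the verify option". Sketched as a one-liner with the same binary and bib input path as in the log, outside the NOT helper:

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y \
        || echo "rejected as expected: compress does not support -y verify"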
00:07:00.294 05:53:52 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:00.294 05:53:52 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:00.294 05:53:52 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:00.294 05:53:52 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:00.294 05:53:52 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.294 05:53:52 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.294 05:53:52 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:00.294 05:53:52 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:00.294 05:53:52 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:07:00.553 Unsupported workload type: foobar 00:07:00.553 [2024-07-13 05:53:52.027447] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:00.553 accel_perf options: 00:07:00.553 [-h help message] 00:07:00.553 [-q queue depth per core] 00:07:00.553 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:00.553 [-T number of threads per core 00:07:00.553 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:00.553 [-t time in seconds] 00:07:00.553 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:00.553 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:00.553 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:00.553 [-l for compress/decompress workloads, name of uncompressed input file 00:07:00.553 [-S for crc32c workload, use this seed value (default 0) 00:07:00.553 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:00.553 [-f for fill workload, use this BYTE value (default 255) 00:07:00.553 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:00.553 [-y verify result if this switch is on] 00:07:00.553 [-a tasks to allocate per core (default: same value as -q)] 00:07:00.553 Can be used to spread operations across a wider range of memory. 
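The usage block printed above is accel_perf rejecting "-w foobar" in spdk_app_parse_args (note there is no EAL parameters line for this run, so the failure happens before app start); the valid workload types are the ones listed in that usage text (copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor and the dif variants). Reproduced outside the test harness it is simply:

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w foobar \
        || echo "rejected as expected: foobar is not a supported workload type"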
00:07:00.553 05:53:52 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:07:00.553 05:53:52 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:00.553 05:53:52 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:00.553 05:53:52 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:00.553 00:07:00.553 real 0m0.027s 00:07:00.553 user 0m0.017s 00:07:00.553 sys 0m0.010s 00:07:00.553 05:53:52 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:00.553 05:53:52 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:00.553 ************************************ 00:07:00.553 END TEST accel_wrong_workload 00:07:00.553 ************************************ 00:07:00.553 05:53:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:00.553 05:53:52 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:00.553 05:53:52 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:00.553 05:53:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.553 05:53:52 accel -- common/autotest_common.sh@10 -- # set +x 00:07:00.553 ************************************ 00:07:00.553 START TEST accel_negative_buffers 00:07:00.553 ************************************ 00:07:00.553 05:53:52 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:00.553 05:53:52 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:07:00.553 05:53:52 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:00.553 05:53:52 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:00.553 05:53:52 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:00.553 05:53:52 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:00.553 05:53:52 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:00.553 05:53:52 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:07:00.553 05:53:52 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:00.553 05:53:52 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:00.553 05:53:52 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:00.553 05:53:52 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:00.553 05:53:52 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.553 05:53:52 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.553 05:53:52 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:00.553 05:53:52 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:00.553 05:53:52 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:00.553 -x option must be non-negative. 
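accel_negative_buffers drives the same parse-time rejection through the "-x" option: "-x -1" trips the "-x option must be non-negative." check (per the usage text, -x is the number of xor source buffers, minimum 2), and the usage block is printed again below. As a standalone sketch:

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x -1 \
        || echo "rejected as expected: -x must be non-negative"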
00:07:00.553 [2024-07-13 05:53:52.095775] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:00.553 accel_perf options: 00:07:00.553 [-h help message] 00:07:00.553 [-q queue depth per core] 00:07:00.553 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:00.553 [-T number of threads per core 00:07:00.553 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:00.553 [-t time in seconds] 00:07:00.553 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:00.553 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:00.553 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:00.554 [-l for compress/decompress workloads, name of uncompressed input file 00:07:00.554 [-S for crc32c workload, use this seed value (default 0) 00:07:00.554 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:00.554 [-f for fill workload, use this BYTE value (default 255) 00:07:00.554 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:00.554 [-y verify result if this switch is on] 00:07:00.554 [-a tasks to allocate per core (default: same value as -q)] 00:07:00.554 Can be used to spread operations across a wider range of memory. 00:07:00.554 05:53:52 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:07:00.554 05:53:52 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:00.554 05:53:52 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:00.554 05:53:52 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:00.554 00:07:00.554 real 0m0.025s 00:07:00.554 user 0m0.015s 00:07:00.554 sys 0m0.010s 00:07:00.554 05:53:52 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:00.554 ************************************ 00:07:00.554 END TEST accel_negative_buffers 00:07:00.554 ************************************ 00:07:00.554 05:53:52 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:00.554 05:53:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:00.554 05:53:52 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:00.554 05:53:52 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:00.554 05:53:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.554 05:53:52 accel -- common/autotest_common.sh@10 -- # set +x 00:07:00.554 ************************************ 00:07:00.554 START TEST accel_crc32c 00:07:00.554 ************************************ 00:07:00.554 05:53:52 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:00.554 05:53:52 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:00.554 05:53:52 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:00.554 05:53:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.554 05:53:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.554 05:53:52 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:00.554 05:53:52 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:07:00.554 05:53:52 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:00.554 05:53:52 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:00.554 05:53:52 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:00.554 05:53:52 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.554 05:53:52 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.554 05:53:52 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:00.554 05:53:52 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:00.554 05:53:52 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:00.554 [2024-07-13 05:53:52.162890] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:00.554 [2024-07-13 05:53:52.162972] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73281 ] 00:07:00.814 [2024-07-13 05:53:52.289992] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.814 [2024-07-13 05:53:52.321094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@19 
-- # read -r var val 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.814 05:53:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.750 05:53:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:01.750 05:53:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.750 05:53:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.750 05:53:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var 
val 00:07:01.750 05:53:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:01.750 05:53:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.750 05:53:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.750 05:53:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.750 05:53:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:01.750 05:53:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.750 05:53:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.750 05:53:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.750 05:53:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:01.750 05:53:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.750 05:53:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.750 05:53:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.750 05:53:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:01.750 05:53:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.750 05:53:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.750 05:53:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.750 05:53:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:01.750 05:53:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.750 05:53:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.750 05:53:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.750 05:53:53 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:01.750 05:53:53 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:01.750 05:53:53 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:01.750 00:07:01.750 real 0m1.291s 00:07:01.750 user 0m1.131s 00:07:01.750 sys 0m0.070s 00:07:01.750 05:53:53 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:01.750 05:53:53 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:01.750 ************************************ 00:07:01.750 END TEST accel_crc32c 00:07:01.750 ************************************ 00:07:01.750 05:53:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:01.750 05:53:53 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:02.010 05:53:53 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:02.010 05:53:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.010 05:53:53 accel -- common/autotest_common.sh@10 -- # set +x 00:07:02.010 ************************************ 00:07:02.010 START TEST accel_crc32c_C2 00:07:02.010 ************************************ 00:07:02.010 05:53:53 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:02.010 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:02.010 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:02.010 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.010 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:02.010 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.010 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:02.010 05:53:53 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:02.010 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:02.010 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:02.010 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.010 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.010 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:02.010 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:02.010 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:02.010 [2024-07-13 05:53:53.508561] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:02.010 [2024-07-13 05:53:53.508716] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73310 ] 00:07:02.010 [2024-07-13 05:53:53.642128] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.010 [2024-07-13 05:53:53.672589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.010 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:02.010 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.010 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.010 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.010 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:02.010 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.010 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.010 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.010 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:02.010 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.010 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.010 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.010 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:02.010 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.010 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.010 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.010 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:02.010 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.010 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.010 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.010 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:02.010 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.010 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:02.010 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.010 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.010 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:02.010 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.010 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:07:02.010 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.010 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:02.010 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.010 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.010 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.010 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:02.010 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.010 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.010 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.010 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:02.010 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.010 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:02.010 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.010 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.010 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:02.010 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.010 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.010 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.010 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:02.010 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.010 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.010 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.010 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:02.010 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.011 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.011 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.011 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:02.011 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.011 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.011 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.011 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:02.011 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.011 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.011 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.011 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:02.011 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.011 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.011 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.011 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:02.011 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.011 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.011 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:03.389 05:53:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:03.389 05:53:54 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.389 05:53:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:03.389 05:53:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:03.389 05:53:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:03.389 05:53:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.389 05:53:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:03.389 05:53:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:03.389 05:53:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:03.389 05:53:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.389 05:53:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:03.389 05:53:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:03.389 05:53:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:03.389 05:53:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.389 05:53:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:03.389 05:53:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:03.389 05:53:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:03.389 05:53:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.389 05:53:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:03.389 05:53:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:03.389 05:53:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:03.389 05:53:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.389 05:53:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:03.389 05:53:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:03.389 05:53:54 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:03.389 05:53:54 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:03.389 05:53:54 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.389 00:07:03.389 real 0m1.307s 00:07:03.389 user 0m1.155s 00:07:03.389 sys 0m0.064s 00:07:03.389 05:53:54 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:03.389 05:53:54 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:03.389 ************************************ 00:07:03.389 END TEST accel_crc32c_C2 00:07:03.389 ************************************ 00:07:03.389 05:53:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:03.389 05:53:54 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:03.389 05:53:54 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:03.389 05:53:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.389 05:53:54 accel -- common/autotest_common.sh@10 -- # set +x 00:07:03.389 ************************************ 00:07:03.389 START TEST accel_copy 00:07:03.389 ************************************ 00:07:03.389 05:53:54 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:07:03.389 05:53:54 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:03.389 05:53:54 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:07:03.389 05:53:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.389 05:53:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.389 05:53:54 accel.accel_copy -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:03.389 05:53:54 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:03.389 05:53:54 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:03.389 05:53:54 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:03.389 05:53:54 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:03.389 05:53:54 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.390 05:53:54 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.390 05:53:54 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:03.390 05:53:54 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:03.390 05:53:54 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:03.390 [2024-07-13 05:53:54.860318] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:03.390 [2024-07-13 05:53:54.860476] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73349 ] 00:07:03.390 [2024-07-13 05:53:54.989063] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.390 [2024-07-13 05:53:55.019519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.390 05:53:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:03.390 05:53:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.390 05:53:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.390 05:53:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.390 05:53:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:03.390 05:53:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.390 05:53:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.390 05:53:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.390 05:53:55 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:03.390 05:53:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.390 05:53:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.390 05:53:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.390 05:53:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:03.390 05:53:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.390 05:53:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.390 05:53:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.390 05:53:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:03.390 05:53:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.390 05:53:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.390 05:53:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.390 05:53:55 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:03.390 05:53:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.390 05:53:55 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:03.390 05:53:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.390 05:53:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.390 05:53:55 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:03.390 05:53:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.390 
05:53:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.390 05:53:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.390 05:53:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:03.390 05:53:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.390 05:53:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.390 05:53:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.390 05:53:55 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:03.390 05:53:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.390 05:53:55 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:03.390 05:53:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.390 05:53:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.390 05:53:55 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:03.390 05:53:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.390 05:53:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.390 05:53:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.390 05:53:55 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:03.390 05:53:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.390 05:53:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.390 05:53:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.390 05:53:55 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:03.390 05:53:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.390 05:53:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.390 05:53:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.390 05:53:55 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:03.390 05:53:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.390 05:53:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.390 05:53:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.390 05:53:55 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:03.390 05:53:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.390 05:53:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.390 05:53:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.390 05:53:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:03.390 05:53:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.390 05:53:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.390 05:53:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.390 05:53:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:03.390 05:53:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.390 05:53:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.390 05:53:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:04.767 05:53:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:04.767 05:53:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:04.767 05:53:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.767 05:53:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:04.767 05:53:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:04.767 05:53:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:04.767 05:53:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.767 05:53:56 accel.accel_copy -- accel/accel.sh@19 
-- # read -r var val 00:07:04.767 05:53:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:04.767 05:53:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:04.767 05:53:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.767 05:53:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:04.767 05:53:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:04.767 05:53:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:04.767 05:53:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.767 05:53:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:04.767 05:53:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:04.767 05:53:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:04.767 05:53:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.767 05:53:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:04.767 05:53:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:04.767 05:53:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:04.767 05:53:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.767 05:53:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:04.767 05:53:56 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:04.767 05:53:56 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:04.767 05:53:56 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:04.767 00:07:04.767 real 0m1.300s 00:07:04.767 user 0m1.143s 00:07:04.767 sys 0m0.066s 00:07:04.767 05:53:56 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:04.767 05:53:56 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:04.767 ************************************ 00:07:04.767 END TEST accel_copy 00:07:04.767 ************************************ 00:07:04.767 05:53:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:04.767 05:53:56 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:04.767 05:53:56 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:04.767 05:53:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.767 05:53:56 accel -- common/autotest_common.sh@10 -- # set +x 00:07:04.767 ************************************ 00:07:04.767 START TEST accel_fill 00:07:04.767 ************************************ 00:07:04.767 05:53:56 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:04.767 05:53:56 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:04.767 05:53:56 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:04.767 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.767 05:53:56 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:04.767 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.767 05:53:56 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:04.767 05:53:56 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:07:04.767 05:53:56 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:04.767 05:53:56 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:04.767 05:53:56 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.767 05:53:56 accel.accel_fill -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.767 05:53:56 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:04.767 05:53:56 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:04.767 05:53:56 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:04.767 [2024-07-13 05:53:56.208081] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:04.767 [2024-07-13 05:53:56.208162] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73379 ] 00:07:04.767 [2024-07-13 05:53:56.336120] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.767 [2024-07-13 05:53:56.367258] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.767 05:53:56 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:04.767 05:53:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.767 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.767 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.767 05:53:56 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:04.767 05:53:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.767 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.767 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.767 05:53:56 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:04.767 05:53:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.767 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.767 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.767 05:53:56 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:04.767 05:53:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.767 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.767 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.767 05:53:56 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:04.767 05:53:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.767 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.767 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.767 05:53:56 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:04.767 05:53:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.767 05:53:56 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:04.767 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.767 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.767 05:53:56 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:04.767 05:53:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.767 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.767 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.767 05:53:56 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:04.767 05:53:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.767 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.767 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.767 05:53:56 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:04.767 05:53:56 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 00:07:04.767 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.768 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.768 05:53:56 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:04.768 05:53:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.768 05:53:56 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:04.768 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.768 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.768 05:53:56 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:04.768 05:53:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.768 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.768 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.768 05:53:56 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:04.768 05:53:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.768 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.768 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.768 05:53:56 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:04.768 05:53:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.768 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.768 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.768 05:53:56 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:04.768 05:53:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.768 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.768 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.768 05:53:56 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:04.768 05:53:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.768 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.768 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.768 05:53:56 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:04.768 05:53:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.768 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.768 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.768 05:53:56 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:04.768 05:53:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.768 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.768 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:06.144 05:53:57 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:06.144 05:53:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:06.144 05:53:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:06.144 05:53:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:06.144 05:53:57 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:06.144 05:53:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:06.144 05:53:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:06.144 05:53:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:06.144 05:53:57 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:06.144 05:53:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:06.144 05:53:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 
00:07:06.144 05:53:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:06.144 05:53:57 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:06.144 05:53:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:06.144 05:53:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:06.144 05:53:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:06.144 05:53:57 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:06.144 05:53:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:06.144 05:53:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:06.144 05:53:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:06.144 05:53:57 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:06.144 05:53:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:06.144 05:53:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:06.144 05:53:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:06.144 05:53:57 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:06.144 05:53:57 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:06.144 05:53:57 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.144 00:07:06.144 real 0m1.301s 00:07:06.144 user 0m1.142s 00:07:06.144 sys 0m0.066s 00:07:06.144 05:53:57 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:06.144 ************************************ 00:07:06.144 END TEST accel_fill 00:07:06.144 05:53:57 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:06.144 ************************************ 00:07:06.144 05:53:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:06.144 05:53:57 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:06.144 05:53:57 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:06.144 05:53:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.144 05:53:57 accel -- common/autotest_common.sh@10 -- # set +x 00:07:06.144 ************************************ 00:07:06.144 START TEST accel_copy_crc32c 00:07:06.144 ************************************ 00:07:06.144 05:53:57 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:07:06.144 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:06.144 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:06.144 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.144 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.144 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # 
local IFS=, 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:06.145 [2024-07-13 05:53:57.562478] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:06.145 [2024-07-13 05:53:57.562573] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73408 ] 00:07:06.145 [2024-07-13 05:53:57.697174] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.145 [2024-07-13 05:53:57.728030] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.145 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.523 05:53:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:07.523 05:53:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 
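The copy_crc32c case drives the same accel_perf example binary shown in the invocation above (-t 1 -w copy_crc32c -y on the software module). Assuming the same checkout path as the trace, a stripped-down reproduction outside the run_test harness would be:
    cd /home/vagrant/spdk_repo/spdk
    ./build/examples/accel_perf -t 1 -w copy_crc32c -y    # 1-second copy+CRC-32C run; -y verifies the results
    # the harness additionally passes -c /dev/fd/62 to feed its generated accel JSON config over a pipe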
00:07:07.523 05:53:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.523 05:53:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.523 05:53:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:07.523 05:53:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.523 05:53:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.523 05:53:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.523 05:53:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:07.523 05:53:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.523 05:53:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.523 05:53:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.523 05:53:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:07.523 05:53:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.523 05:53:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.523 05:53:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.523 05:53:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:07.523 05:53:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.523 05:53:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.523 05:53:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.523 05:53:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:07.523 05:53:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.523 05:53:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.523 05:53:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.523 05:53:58 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:07.523 05:53:58 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:07.523 05:53:58 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:07.523 00:07:07.523 real 0m1.305s 00:07:07.523 user 0m1.151s 00:07:07.523 sys 0m0.061s 00:07:07.523 ************************************ 00:07:07.523 END TEST accel_copy_crc32c 00:07:07.523 ************************************ 00:07:07.523 05:53:58 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.523 05:53:58 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:07.523 05:53:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:07.523 05:53:58 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:07.523 05:53:58 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:07.523 05:53:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.523 05:53:58 accel -- common/autotest_common.sh@10 -- # set +x 00:07:07.523 ************************************ 00:07:07.523 START TEST accel_copy_crc32c_C2 00:07:07.523 ************************************ 00:07:07.523 05:53:58 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:07.523 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:07.523 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:07.523 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.523 05:53:58 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@19 -- # read -r var val 00:07:07.523 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:07.523 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:07.523 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:07.523 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:07.523 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:07.523 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.523 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.523 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:07.523 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:07.523 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:07.524 [2024-07-13 05:53:58.920674] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:07.524 [2024-07-13 05:53:58.920796] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73443 ] 00:07:07.524 [2024-07-13 05:53:59.053491] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.524 [2024-07-13 05:53:59.087061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.524 05:53:59 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.524 05:53:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.929 05:54:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:08.929 05:54:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.929 05:54:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.929 05:54:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.929 05:54:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:08.929 05:54:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.929 05:54:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.929 05:54:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.929 05:54:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:08.929 05:54:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.930 05:54:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.930 05:54:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.930 05:54:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:08.930 05:54:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.930 05:54:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.930 05:54:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.930 05:54:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:08.930 05:54:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.930 05:54:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.930 05:54:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.930 05:54:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:08.930 05:54:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.930 05:54:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.930 05:54:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.930 05:54:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:08.930 05:54:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:08.930 05:54:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:08.930 00:07:08.930 real 0m1.314s 00:07:08.930 user 0m1.163s 00:07:08.930 sys 0m0.058s 00:07:08.930 05:54:00 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 
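This copy_crc32c_C2 variant is the same workload invoked with -C 2; the config readback above shows an '8192 bytes' buffer next to the '4096 bytes' one, whereas the plain copy_crc32c run reported 4096 bytes twice, so -C appears to scale the second buffer (an inference from the trace, not a documented claim). A matching direct invocation, under the same path assumption as before:
    ./build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2    # copy_crc32c with -C 2; readback shows 8192-byte buffers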
00:07:08.930 ************************************ 00:07:08.930 END TEST accel_copy_crc32c_C2 00:07:08.930 05:54:00 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:08.930 ************************************ 00:07:08.930 05:54:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:08.930 05:54:00 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:08.930 05:54:00 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:08.930 05:54:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.930 05:54:00 accel -- common/autotest_common.sh@10 -- # set +x 00:07:08.930 ************************************ 00:07:08.930 START TEST accel_dualcast 00:07:08.930 ************************************ 00:07:08.930 05:54:00 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:08.930 [2024-07-13 05:54:00.282977] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
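Next the dualcast case starts; the command line logged just above is accel_perf -c /dev/fd/62 -t 1 -w dualcast -y, i.e. one source buffer duplicated into two destinations on the software module. A direct reproduction sketch, assuming the same checkout path:
    ./build/examples/accel_perf -t 1 -w dualcast -y    # 1-second dualcast run with verification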
00:07:08.930 [2024-07-13 05:54:00.283062] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73477 ] 00:07:08.930 [2024-07-13 05:54:00.419008] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.930 [2024-07-13 05:54:00.451870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.930 05:54:00 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.930 05:54:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:09.867 05:54:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:09.867 05:54:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:09.867 05:54:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:09.867 05:54:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:09.867 05:54:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:09.867 05:54:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:09.867 05:54:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:09.867 05:54:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:09.867 05:54:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:09.867 05:54:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:09.867 05:54:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:09.867 05:54:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:09.867 05:54:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:09.867 05:54:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:09.867 05:54:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:09.867 05:54:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var 
val 00:07:09.867 05:54:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:09.867 05:54:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:09.867 05:54:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:09.867 05:54:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:09.867 05:54:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:09.867 05:54:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:09.868 05:54:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:09.868 05:54:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:09.868 05:54:01 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:09.868 05:54:01 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:09.868 05:54:01 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.868 00:07:09.868 real 0m1.320s 00:07:09.868 user 0m1.160s 00:07:09.868 sys 0m0.066s 00:07:09.868 ************************************ 00:07:09.868 END TEST accel_dualcast 00:07:09.868 ************************************ 00:07:09.868 05:54:01 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:09.868 05:54:01 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:10.127 05:54:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:10.127 05:54:01 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:10.127 05:54:01 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:10.127 05:54:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.127 05:54:01 accel -- common/autotest_common.sh@10 -- # set +x 00:07:10.127 ************************************ 00:07:10.127 START TEST accel_compare 00:07:10.127 ************************************ 00:07:10.127 05:54:01 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:07:10.127 05:54:01 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:10.127 05:54:01 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:10.127 05:54:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.127 05:54:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.127 05:54:01 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:10.127 05:54:01 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:10.127 05:54:01 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:10.127 05:54:01 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:10.127 05:54:01 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:10.127 05:54:01 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.127 05:54:01 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.127 05:54:01 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:10.127 05:54:01 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:10.127 05:54:01 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:10.127 [2024-07-13 05:54:01.660946] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
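The compare case follows the same pattern with -w compare (checking that two equal-sized buffers match). Reproduction sketch under the same path assumption:
    ./build/examples/accel_perf -t 1 -w compare -y    # 1-second compare run on the software module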
00:07:10.127 [2024-07-13 05:54:01.661038] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73506 ] 00:07:10.127 [2024-07-13 05:54:01.798522] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.127 [2024-07-13 05:54:01.834855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.387 05:54:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:10.387 05:54:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.387 05:54:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.387 05:54:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.387 05:54:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:10.387 05:54:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.387 05:54:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.387 05:54:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.387 05:54:01 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:10.387 05:54:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.387 05:54:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.387 05:54:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.387 05:54:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:10.387 05:54:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.387 05:54:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.387 05:54:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.387 05:54:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:10.387 05:54:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.387 05:54:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.387 05:54:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.387 05:54:01 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:10.387 05:54:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.387 05:54:01 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:10.387 05:54:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.387 05:54:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.387 05:54:01 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:10.387 05:54:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.387 05:54:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.387 05:54:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.387 05:54:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:10.387 05:54:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.387 05:54:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.387 05:54:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.387 05:54:01 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:10.387 05:54:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.387 05:54:01 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:10.387 05:54:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.387 05:54:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var 
val 00:07:10.387 05:54:01 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:10.387 05:54:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.387 05:54:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.387 05:54:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.387 05:54:01 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:10.387 05:54:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.387 05:54:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.387 05:54:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.387 05:54:01 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:10.387 05:54:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.387 05:54:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.387 05:54:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.387 05:54:01 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:10.387 05:54:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.387 05:54:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.387 05:54:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.387 05:54:01 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:10.387 05:54:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.387 05:54:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.387 05:54:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.387 05:54:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:10.387 05:54:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.387 05:54:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.387 05:54:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.387 05:54:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:10.387 05:54:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.387 05:54:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.387 05:54:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:11.323 05:54:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:11.323 05:54:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:11.323 05:54:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:11.323 05:54:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:11.323 05:54:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:11.323 05:54:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:11.323 05:54:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:11.323 05:54:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:11.323 05:54:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:11.323 05:54:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:11.323 05:54:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:11.323 05:54:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:11.323 05:54:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:11.323 05:54:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:11.323 05:54:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:11.323 05:54:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:11.323 05:54:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 
00:07:11.323 05:54:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:11.323 05:54:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:11.323 05:54:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:11.323 05:54:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:11.323 05:54:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:11.323 05:54:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:11.323 05:54:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:11.323 ************************************ 00:07:11.323 END TEST accel_compare 00:07:11.323 ************************************ 00:07:11.323 05:54:02 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:11.323 05:54:02 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:11.323 05:54:02 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:11.323 00:07:11.323 real 0m1.330s 00:07:11.323 user 0m1.162s 00:07:11.323 sys 0m0.077s 00:07:11.324 05:54:02 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.324 05:54:02 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:11.324 05:54:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:11.324 05:54:03 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:11.324 05:54:03 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:11.324 05:54:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.324 05:54:03 accel -- common/autotest_common.sh@10 -- # set +x 00:07:11.324 ************************************ 00:07:11.324 START TEST accel_xor 00:07:11.324 ************************************ 00:07:11.324 05:54:03 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:07:11.324 05:54:03 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:11.324 05:54:03 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:11.324 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.324 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.324 05:54:03 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:11.324 05:54:03 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:11.324 05:54:03 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:11.324 05:54:03 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:11.324 05:54:03 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:11.324 05:54:03 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.324 05:54:03 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.324 05:54:03 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:11.324 05:54:03 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:11.324 05:54:03 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:11.324 [2024-07-13 05:54:03.038987] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
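The first xor case is launched here with -w xor -y and no explicit source count; its config readback below reports val=2, so two source buffers are xor'ed in this run. Direct invocation sketch, same path assumption:
    ./build/examples/accel_perf -t 1 -w xor -y    # 1-second xor run, two source buffers per the readback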
00:07:11.324 [2024-07-13 05:54:03.039096] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73541 ] 00:07:11.581 [2024-07-13 05:54:03.169684] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.581 [2024-07-13 05:54:03.203668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.581 05:54:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.581 05:54:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.581 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.581 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.581 05:54:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.581 05:54:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.581 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.581 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.581 05:54:03 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:11.581 05:54:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.581 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.581 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.581 05:54:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.581 05:54:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.581 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.581 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.581 05:54:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.581 05:54:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.582 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.582 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.582 05:54:03 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:11.582 05:54:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.582 05:54:03 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:11.582 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.582 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.582 05:54:03 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:11.582 05:54:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.582 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.582 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.582 05:54:03 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:11.582 05:54:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.582 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.582 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.582 05:54:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.582 05:54:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.582 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.582 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.582 05:54:03 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:11.582 05:54:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.582 05:54:03 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:07:11.582 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.582 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.582 05:54:03 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:11.582 05:54:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.582 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.582 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.582 05:54:03 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:11.582 05:54:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.582 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.582 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.582 05:54:03 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:11.582 05:54:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.582 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.582 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.582 05:54:03 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:11.582 05:54:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.582 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.582 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.582 05:54:03 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:11.582 05:54:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.582 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.582 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.582 05:54:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.582 05:54:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.582 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.582 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.582 05:54:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.582 05:54:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.582 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.582 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.957 05:54:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.957 05:54:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.957 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.957 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.957 05:54:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.957 05:54:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.957 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.957 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.957 05:54:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.957 05:54:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.957 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.957 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.957 05:54:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.957 05:54:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.957 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.957 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.957 05:54:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.957 05:54:04 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:07:12.957 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.957 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.957 05:54:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.957 05:54:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.957 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.957 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.957 05:54:04 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:12.957 05:54:04 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:12.957 05:54:04 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:12.957 00:07:12.957 real 0m1.319s 00:07:12.957 user 0m1.150s 00:07:12.957 sys 0m0.077s 00:07:12.957 05:54:04 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:12.957 ************************************ 00:07:12.957 END TEST accel_xor 00:07:12.957 ************************************ 00:07:12.957 05:54:04 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:12.957 05:54:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:12.957 05:54:04 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:12.957 05:54:04 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:12.957 05:54:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.957 05:54:04 accel -- common/autotest_common.sh@10 -- # set +x 00:07:12.957 ************************************ 00:07:12.957 START TEST accel_xor 00:07:12.957 ************************************ 00:07:12.957 05:54:04 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:07:12.957 05:54:04 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:12.957 05:54:04 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:12.957 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.957 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.957 05:54:04 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:12.957 05:54:04 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:12.957 05:54:04 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:12.957 05:54:04 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:12.957 05:54:04 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:12.957 05:54:04 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.957 05:54:04 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.957 05:54:04 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:12.958 [2024-07-13 05:54:04.412689] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
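The second xor case repeats the test with -x 3; the readback below shows val=3 where the previous run showed val=2, which suggests -x sets the number of xor source buffers (inferred from the trace). Sketch of the direct invocation, same path assumption:
    ./build/examples/accel_perf -t 1 -w xor -y -x 3    # xor with three source buffers, per the val=3 readback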
00:07:12.958 [2024-07-13 05:54:04.412802] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73578 ] 00:07:12.958 [2024-07-13 05:54:04.550149] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.958 [2024-07-13 05:54:04.583838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.958 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:14.333 05:54:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:14.333 05:54:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:14.333 05:54:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:14.333 05:54:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:14.333 05:54:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:14.333 05:54:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:14.333 05:54:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:14.333 05:54:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:14.333 05:54:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:14.333 05:54:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:14.333 05:54:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:14.333 05:54:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:14.333 05:54:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:14.333 05:54:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:14.333 05:54:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:14.333 05:54:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:14.333 05:54:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:14.333 05:54:05 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:07:14.333 05:54:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:14.333 05:54:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:14.333 05:54:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:14.333 05:54:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:14.333 05:54:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:14.333 05:54:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:14.333 05:54:05 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:14.333 05:54:05 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:14.333 05:54:05 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:14.333 00:07:14.333 real 0m1.320s 00:07:14.333 user 0m1.155s 00:07:14.333 sys 0m0.075s 00:07:14.333 05:54:05 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:14.333 05:54:05 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:14.333 ************************************ 00:07:14.333 END TEST accel_xor 00:07:14.333 ************************************ 00:07:14.333 05:54:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:14.333 05:54:05 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:14.333 05:54:05 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:14.333 05:54:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.333 05:54:05 accel -- common/autotest_common.sh@10 -- # set +x 00:07:14.333 ************************************ 00:07:14.333 START TEST accel_dif_verify 00:07:14.333 ************************************ 00:07:14.333 05:54:05 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:07:14.333 05:54:05 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:14.333 05:54:05 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:14.333 05:54:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.333 05:54:05 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:14.333 05:54:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.333 05:54:05 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:14.333 05:54:05 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:14.333 05:54:05 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:14.333 05:54:05 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:14.333 05:54:05 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.333 05:54:05 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.333 05:54:05 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:14.333 05:54:05 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:14.333 05:54:05 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:14.333 [2024-07-13 05:54:05.789183] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
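With the xor tests passed, run_test moves on to the DIF family: the command above starts accel_perf with -w dif_verify, and further down the trace dif_generate and dif_generate_copy are driven the same way, differing only in the workload name. The sizes echoed back below ('4096 bytes', '512 bytes', '8 bytes') are not labeled in the log; the 8-byte value at least matches the size of a standard T10 DIF protection information field. Manual equivalents, with the binary path and flags copied from the trace:

  # The three DIF workloads exercised in this part of the log, one second each.
  ACCEL_PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
  "$ACCEL_PERF" -t 1 -w dif_verify
  "$ACCEL_PERF" -t 1 -w dif_generate
  "$ACCEL_PERF" -t 1 -w dif_generate_copy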
00:07:14.333 [2024-07-13 05:54:05.789593] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73607 ] 00:07:14.333 [2024-07-13 05:54:05.926995] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.333 [2024-07-13 05:54:05.960686] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.333 05:54:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:14.333 05:54:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.333 05:54:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.333 05:54:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.333 05:54:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:14.333 05:54:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.333 05:54:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.333 05:54:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.333 05:54:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:14.333 05:54:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.333 05:54:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.333 05:54:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.333 05:54:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:14.333 05:54:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.333 05:54:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.333 05:54:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.333 05:54:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:14.333 05:54:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.333 05:54:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.333 05:54:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.333 05:54:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:14.333 05:54:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.333 05:54:05 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:14.333 05:54:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.333 05:54:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.333 05:54:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:14.333 05:54:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.333 05:54:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.333 05:54:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.333 05:54:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:14.333 05:54:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.333 05:54:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.333 05:54:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.333 05:54:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:14.333 05:54:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.333 05:54:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.333 05:54:05 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.333 05:54:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:14.333 05:54:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.333 05:54:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.333 05:54:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.333 05:54:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:14.333 05:54:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.333 05:54:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.333 05:54:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.333 05:54:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:14.333 05:54:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.333 05:54:06 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:14.333 05:54:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.333 05:54:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.333 05:54:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:14.333 05:54:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.333 05:54:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.333 05:54:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.333 05:54:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:14.333 05:54:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.333 05:54:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.333 05:54:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.333 05:54:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:14.333 05:54:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.333 05:54:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.333 05:54:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.333 05:54:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:14.333 05:54:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.333 05:54:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.333 05:54:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.333 05:54:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:14.333 05:54:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.333 05:54:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.333 05:54:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.333 05:54:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:14.333 05:54:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.333 05:54:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.333 05:54:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.333 05:54:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:14.333 05:54:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.333 05:54:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.333 05:54:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:15.708 05:54:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:15.708 05:54:07 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:15.708 05:54:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:15.708 05:54:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:15.708 05:54:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:15.708 05:54:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:15.708 05:54:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:15.708 05:54:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:15.708 05:54:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:15.708 05:54:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:15.708 05:54:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:15.708 05:54:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:15.708 05:54:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:15.708 05:54:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:15.708 05:54:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:15.708 05:54:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:15.708 05:54:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:15.708 05:54:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:15.708 05:54:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:15.708 05:54:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:15.708 05:54:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:15.708 05:54:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:15.708 05:54:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:15.708 05:54:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:15.708 05:54:07 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:15.708 05:54:07 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:15.708 05:54:07 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:15.708 00:07:15.708 real 0m1.318s 00:07:15.708 user 0m1.153s 00:07:15.708 sys 0m0.074s 00:07:15.708 05:54:07 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:15.708 ************************************ 00:07:15.708 END TEST accel_dif_verify 00:07:15.708 ************************************ 00:07:15.708 05:54:07 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:15.708 05:54:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:15.708 05:54:07 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:15.708 05:54:07 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:15.708 05:54:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.708 05:54:07 accel -- common/autotest_common.sh@10 -- # set +x 00:07:15.708 ************************************ 00:07:15.708 START TEST accel_dif_generate 00:07:15.708 ************************************ 00:07:15.708 05:54:07 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:07:15.708 05:54:07 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:15.708 05:54:07 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:15.708 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.708 05:54:07 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.708 05:54:07 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:15.708 05:54:07 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:15.708 05:54:07 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:15.708 05:54:07 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:15.708 05:54:07 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:15.708 05:54:07 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.708 05:54:07 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.708 05:54:07 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:15.708 05:54:07 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:15.708 05:54:07 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:15.708 [2024-07-13 05:54:07.157181] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:15.708 [2024-07-13 05:54:07.157291] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73642 ] 00:07:15.708 [2024-07-13 05:54:07.295904] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.708 [2024-07-13 05:54:07.331432] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.708 05:54:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:15.708 05:54:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.708 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.708 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.708 05:54:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:15.708 05:54:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.708 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.708 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.708 05:54:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:15.708 05:54:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.708 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.708 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.708 05:54:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:15.708 05:54:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.708 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.708 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.708 05:54:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:15.708 05:54:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.708 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.708 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.708 05:54:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:15.708 05:54:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.708 05:54:07 
accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:15.708 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.708 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.708 05:54:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:15.708 05:54:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.708 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.708 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.708 05:54:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:15.709 05:54:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.709 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.709 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.709 05:54:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:15.709 05:54:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.709 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.709 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.709 05:54:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:15.709 05:54:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.709 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.709 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.709 05:54:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:15.709 05:54:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.709 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.709 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.709 05:54:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:15.709 05:54:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.709 05:54:07 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:15.709 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.709 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.709 05:54:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:15.709 05:54:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.709 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.709 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.709 05:54:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:15.709 05:54:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.709 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.709 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.709 05:54:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:15.709 05:54:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.709 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.709 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.709 05:54:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:07:15.709 05:54:07 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:07:15.709 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.709 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.709 05:54:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:15.709 05:54:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.709 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.709 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.709 05:54:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:15.709 05:54:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.709 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.709 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.709 05:54:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:15.709 05:54:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.709 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.709 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:17.086 05:54:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:17.086 05:54:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:17.086 05:54:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:17.086 05:54:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:17.086 05:54:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:17.086 05:54:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:17.086 05:54:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:17.086 05:54:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:17.086 05:54:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:17.086 05:54:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:17.086 05:54:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:17.086 05:54:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:17.086 05:54:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:17.086 05:54:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:17.086 05:54:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:17.086 05:54:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:17.086 05:54:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:17.086 05:54:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:17.086 05:54:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:17.086 05:54:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:17.086 05:54:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:17.086 05:54:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:17.086 05:54:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:17.086 05:54:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:17.086 05:54:08 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:17.086 05:54:08 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:17.086 05:54:08 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:17.086 00:07:17.086 real 0m1.320s 
00:07:17.086 user 0m1.154s 00:07:17.086 sys 0m0.073s 00:07:17.086 05:54:08 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.086 ************************************ 00:07:17.086 END TEST accel_dif_generate 00:07:17.086 ************************************ 00:07:17.086 05:54:08 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:17.086 05:54:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:17.086 05:54:08 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:17.086 05:54:08 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:17.086 05:54:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.086 05:54:08 accel -- common/autotest_common.sh@10 -- # set +x 00:07:17.086 ************************************ 00:07:17.086 START TEST accel_dif_generate_copy 00:07:17.086 ************************************ 00:07:17.086 05:54:08 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:07:17.086 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:17.086 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:17.086 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.086 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.086 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:17.086 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:17.086 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:17.086 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:17.086 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:17.086 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.086 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.086 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:17.086 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:17.086 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:17.086 [2024-07-13 05:54:08.525260] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
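The pass/fail decision that closes every workload above is the same three-part shell test; the [[ -n software ]], [[ -n dif_generate ]] and [[ software == \s\o\f\t\w\a\r\e ]] lines are those conditions with the captured variables already expanded by xtrace. Written back in terms of the variable names seen earlier in the trace, the check amounts to:

  # A workload passes only if accel_perf reported both a module and an opcode,
  # and the module is the expected one ('software' in this run, where no extra
  # accel module was configured).
  [[ -n "$accel_module" ]] && [[ -n "$accel_opc" ]] && [[ "$accel_module" == software ]]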
00:07:17.086 [2024-07-13 05:54:08.525345] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73676 ] 00:07:17.086 [2024-07-13 05:54:08.662623] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.086 [2024-07-13 05:54:08.699607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.086 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:17.086 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.086 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.086 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.086 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:17.086 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.086 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.086 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.086 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:17.086 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.086 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.086 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.086 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:17.086 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.086 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.086 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.086 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:17.086 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.086 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.086 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.086 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:17.086 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.086 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:17.086 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.086 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.086 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:17.086 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.086 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.086 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.086 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:17.086 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.086 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.086 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.086 05:54:08 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:17.086 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.086 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.086 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.086 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:17.087 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.087 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:17.087 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.087 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.087 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:17.087 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.087 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.087 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.087 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:17.087 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.087 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.087 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.087 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:17.087 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.087 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.087 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.087 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:17.087 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.087 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.087 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.087 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:17.087 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.087 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.087 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.087 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:17.087 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.087 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.087 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.087 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:17.087 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.087 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.087 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:18.461 05:54:09 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:18.461 05:54:09 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:18.461 05:54:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:07:18.461 05:54:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:18.461 05:54:09 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:18.461 05:54:09 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:18.461 05:54:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:18.462 05:54:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:18.462 05:54:09 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:18.462 05:54:09 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:18.462 05:54:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:18.462 05:54:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:18.462 05:54:09 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:18.462 05:54:09 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:18.462 05:54:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:18.462 05:54:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:18.462 05:54:09 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:18.462 05:54:09 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:18.462 05:54:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:18.462 05:54:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:18.462 05:54:09 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:18.462 05:54:09 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:18.462 05:54:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:18.462 05:54:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:18.462 05:54:09 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:18.462 05:54:09 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:18.462 05:54:09 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:18.462 00:07:18.462 real 0m1.321s 00:07:18.462 user 0m1.154s 00:07:18.462 sys 0m0.076s 00:07:18.462 05:54:09 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:18.462 ************************************ 00:07:18.462 END TEST accel_dif_generate_copy 00:07:18.462 ************************************ 00:07:18.462 05:54:09 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:18.462 05:54:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:18.462 05:54:09 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:18.462 05:54:09 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:18.462 05:54:09 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:18.462 05:54:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.462 05:54:09 accel -- common/autotest_common.sh@10 -- # set +x 00:07:18.462 ************************************ 00:07:18.462 START TEST accel_comp 00:07:18.462 ************************************ 00:07:18.462 05:54:09 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:18.462 05:54:09 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:07:18.462 05:54:09 
accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:07:18.462 05:54:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.462 05:54:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.462 05:54:09 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:18.462 05:54:09 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:18.462 05:54:09 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:18.462 05:54:09 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:18.462 05:54:09 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:18.462 05:54:09 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.462 05:54:09 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.462 05:54:09 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:18.462 05:54:09 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:18.462 05:54:09 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:18.462 [2024-07-13 05:54:09.900431] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:18.462 [2024-07-13 05:54:09.901041] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73705 ] 00:07:18.462 [2024-07-13 05:54:10.038020] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.462 [2024-07-13 05:54:10.074368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # 
IFS=: 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.462 05:54:10 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.462 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.837 05:54:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:19.837 05:54:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.837 05:54:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:19.837 05:54:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.837 05:54:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:19.837 05:54:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.837 05:54:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:19.837 05:54:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.837 05:54:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:19.838 05:54:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.838 05:54:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:19.838 05:54:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.838 05:54:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:19.838 05:54:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.838 05:54:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:19.838 05:54:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.838 05:54:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:19.838 05:54:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.838 05:54:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:19.838 05:54:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.838 05:54:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:19.838 05:54:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.838 05:54:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:19.838 05:54:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.838 05:54:11 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:19.838 05:54:11 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:19.838 05:54:11 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:19.838 00:07:19.838 real 0m1.322s 00:07:19.838 user 0m1.155s 00:07:19.838 sys 0m0.077s 00:07:19.838 05:54:11 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.838 ************************************ 00:07:19.838 END TEST accel_comp 00:07:19.838 ************************************ 00:07:19.838 05:54:11 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:19.838 05:54:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:19.838 05:54:11 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:19.838 05:54:11 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:19.838 05:54:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.838 05:54:11 accel -- common/autotest_common.sh@10 -- # set +x 00:07:19.838 ************************************ 00:07:19.838 START TEST accel_decomp 00:07:19.838 ************************************ 00:07:19.838 05:54:11 
accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:19.838 05:54:11 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:19.838 05:54:11 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:19.838 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.838 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.838 05:54:11 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:19.838 05:54:11 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:19.838 05:54:11 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:19.838 05:54:11 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:19.838 05:54:11 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:19.838 05:54:11 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.838 05:54:11 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.838 05:54:11 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:19.838 05:54:11 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:19.838 05:54:11 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:19.838 [2024-07-13 05:54:11.271103] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:19.838 [2024-07-13 05:54:11.271189] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73741 ] 00:07:19.838 [2024-07-13 05:54:11.409513] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.838 [2024-07-13 05:54:11.451533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.838 05:54:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:19.838 05:54:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.838 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.838 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.838 05:54:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:19.838 05:54:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.838 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.838 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.838 05:54:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:19.838 05:54:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.838 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.838 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.838 05:54:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:19.838 05:54:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.838 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.838 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.838 05:54:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:19.838 05:54:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.838 05:54:11 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:07:19.838 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.838 05:54:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:19.838 05:54:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.838 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.838 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.838 05:54:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:19.838 05:54:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.838 05:54:11 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:19.838 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.838 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.838 05:54:11 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:19.838 05:54:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.838 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.838 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.838 05:54:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:19.838 05:54:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.838 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.838 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.838 05:54:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:07:19.838 05:54:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.838 05:54:11 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:19.838 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.838 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.838 05:54:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:19.838 05:54:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.838 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.838 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.838 05:54:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:19.838 05:54:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.838 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.838 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.838 05:54:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:19.838 05:54:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.838 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.838 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.839 05:54:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:19.839 05:54:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.839 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.839 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.839 05:54:11 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:19.839 05:54:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.839 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.839 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.839 05:54:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 
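The last two workloads in this excerpt operate on a real input file: the accel_comp run started earlier and the accel_decomp run traced here both point accel_perf at test/accel/bib via -l, and the decompress invocation adds -y so the output is verified (the val=Yes just above is presumably that flag being echoed back). Reproducing the pair by hand, with the paths copied from the trace:

  # Compress the sample file, then decompress it with verification, one second each,
  # mirroring the accel_comp and accel_decomp runs in the log.
  ACCEL_PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
  BIB=/home/vagrant/spdk_repo/spdk/test/accel/bib
  "$ACCEL_PERF" -t 1 -w compress   -l "$BIB"
  "$ACCEL_PERF" -t 1 -w decompress -l "$BIB" -y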
00:07:19.839 05:54:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.839 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.839 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.839 05:54:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:19.839 05:54:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.839 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.839 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.839 05:54:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:19.839 05:54:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.839 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.839 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:21.214 05:54:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:21.214 05:54:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:21.214 05:54:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:21.214 05:54:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:21.214 05:54:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:21.214 05:54:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:21.214 05:54:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:21.214 05:54:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:21.214 05:54:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:21.214 05:54:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:21.215 05:54:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:21.215 05:54:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:21.215 05:54:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:21.215 05:54:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:21.215 05:54:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:21.215 05:54:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:21.215 05:54:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:21.215 05:54:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:21.215 05:54:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:21.215 05:54:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:21.215 05:54:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:21.215 05:54:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:21.215 05:54:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:21.215 05:54:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:21.215 05:54:12 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:21.215 05:54:12 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:21.215 05:54:12 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:21.215 ************************************ 00:07:21.215 END TEST accel_decomp 00:07:21.215 ************************************ 00:07:21.215 00:07:21.215 real 0m1.338s 00:07:21.215 user 0m1.162s 00:07:21.215 sys 0m0.081s 00:07:21.215 05:54:12 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.215 05:54:12 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:21.215 05:54:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:21.215 05:54:12 accel -- accel/accel.sh@118 -- # run_test 
accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:21.215 05:54:12 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:21.215 05:54:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.215 05:54:12 accel -- common/autotest_common.sh@10 -- # set +x 00:07:21.215 ************************************ 00:07:21.215 START TEST accel_decomp_full 00:07:21.215 ************************************ 00:07:21.215 05:54:12 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:21.215 05:54:12 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:07:21.215 05:54:12 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:07:21.215 05:54:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.215 05:54:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.215 05:54:12 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:21.215 05:54:12 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:21.215 05:54:12 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:07:21.215 05:54:12 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:21.215 05:54:12 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:21.215 05:54:12 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.215 05:54:12 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.215 05:54:12 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:21.215 05:54:12 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:07:21.215 05:54:12 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:07:21.215 [2024-07-13 05:54:12.660431] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:07:21.215 [2024-07-13 05:54:12.660511] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73770 ] 00:07:21.215 [2024-07-13 05:54:12.794473] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.215 [2024-07-13 05:54:12.826398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.215 05:54:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:21.215 05:54:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.215 05:54:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.215 05:54:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.215 05:54:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:21.215 05:54:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.215 05:54:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.215 05:54:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.215 05:54:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:21.215 05:54:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.215 05:54:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.215 05:54:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.215 05:54:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:07:21.215 05:54:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.215 05:54:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.215 05:54:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.215 05:54:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:21.215 05:54:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.215 05:54:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.215 05:54:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.215 05:54:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:21.215 05:54:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.215 05:54:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.215 05:54:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.215 05:54:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:07:21.215 05:54:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.215 05:54:12 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:21.215 05:54:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.215 05:54:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.215 05:54:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:21.215 05:54:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.215 05:54:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.215 05:54:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.215 05:54:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:21.215 05:54:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.215 05:54:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.215 05:54:12 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.215 05:54:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:07:21.215 05:54:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.215 05:54:12 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:07:21.215 05:54:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.215 05:54:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.215 05:54:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:21.216 05:54:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.216 05:54:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.216 05:54:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.216 05:54:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:21.216 05:54:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.216 05:54:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.216 05:54:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.216 05:54:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:21.216 05:54:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.216 05:54:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.216 05:54:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.216 05:54:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:07:21.216 05:54:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.216 05:54:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.216 05:54:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.216 05:54:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:21.216 05:54:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.216 05:54:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.216 05:54:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.216 05:54:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:07:21.216 05:54:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.216 05:54:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.216 05:54:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.216 05:54:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:21.216 05:54:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.216 05:54:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.216 05:54:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.216 05:54:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:21.216 05:54:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.216 05:54:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.216 05:54:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:22.592 05:54:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:22.592 05:54:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:22.592 05:54:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:22.592 05:54:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:22.592 05:54:13 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:22.592 05:54:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:22.592 05:54:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:22.592 05:54:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:22.592 05:54:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:22.592 05:54:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:22.592 05:54:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:22.592 05:54:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:22.592 05:54:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:22.592 05:54:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:22.592 05:54:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:22.592 05:54:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:22.592 05:54:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:22.592 ************************************ 00:07:22.592 END TEST accel_decomp_full 00:07:22.592 ************************************ 00:07:22.592 05:54:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:22.592 05:54:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:22.592 05:54:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:22.592 05:54:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:22.592 05:54:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:22.592 05:54:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:22.592 05:54:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:22.592 05:54:13 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:22.592 05:54:13 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:22.592 05:54:13 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:22.592 00:07:22.592 real 0m1.318s 00:07:22.592 user 0m1.149s 00:07:22.592 sys 0m0.078s 00:07:22.592 05:54:13 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.592 05:54:13 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:07:22.592 05:54:13 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:22.592 05:54:13 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:22.592 05:54:13 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:22.592 05:54:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.592 05:54:13 accel -- common/autotest_common.sh@10 -- # set +x 00:07:22.592 ************************************ 00:07:22.592 START TEST accel_decomp_mcore 00:07:22.592 ************************************ 00:07:22.592 05:54:14 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:22.592 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:22.592 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:22.592 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.592 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 
-m 0xf 00:07:22.592 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.592 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:22.592 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:22.592 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:22.592 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:22.592 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.592 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.592 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:22.592 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:22.592 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:22.592 [2024-07-13 05:54:14.025964] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:22.592 [2024-07-13 05:54:14.026050] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73804 ] 00:07:22.592 [2024-07-13 05:54:14.156419] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:22.592 [2024-07-13 05:54:14.192633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.592 [2024-07-13 05:54:14.192791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:22.592 [2024-07-13 05:54:14.193958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:22.592 [2024-07-13 05:54:14.193977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.592 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.592 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.592 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.592 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.592 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.592 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.592 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.592 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.592 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.592 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.592 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.592 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.592 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:22.592 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.592 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.592 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.592 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.592 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.592 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 
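The accel_decomp_mcore run being set up here differs from the single-core case only in the core mask; a hedged sketch of the equivalent standalone invocation (the -m 0xf mask matches the four reactors this trace shows starting on cores 0-3):

# same software decompress workload, spread across four cores (mask 0xf)
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
    -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf
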
00:07:22.592 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.592 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.592 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.592 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.592 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.592 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:22.592 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.592 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:22.592 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.592 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.592 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:22.592 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.592 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.592 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.592 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.592 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.592 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.592 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.592 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:22.592 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.592 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:22.592 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.592 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.592 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:22.592 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.592 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.593 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.593 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:22.593 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.593 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.593 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.593 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:22.593 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.593 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.593 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.593 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:22.593 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.593 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.593 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.593 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:22.593 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.593 
05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.593 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.593 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:22.593 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.593 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.593 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.593 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.593 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.593 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.593 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.593 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.593 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.593 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.593 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.976 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:23.976 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.976 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.976 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.976 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:23.976 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.976 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.976 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.976 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:23.976 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.976 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.976 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.976 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:23.976 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.976 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.976 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.976 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:23.976 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.976 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.976 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.976 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:23.976 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.976 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.976 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.976 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:23.976 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.976 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.976 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.976 05:54:15 accel.accel_decomp_mcore -- 
accel/accel.sh@20 -- # val= 00:07:23.976 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.976 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.976 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.976 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:23.976 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.976 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.976 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.976 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:23.976 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:23.976 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:23.976 00:07:23.976 real 0m1.329s 00:07:23.976 user 0m4.373s 00:07:23.976 sys 0m0.078s 00:07:23.976 05:54:15 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:23.976 ************************************ 00:07:23.976 END TEST accel_decomp_mcore 00:07:23.976 ************************************ 00:07:23.976 05:54:15 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:23.976 05:54:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:23.976 05:54:15 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:23.976 05:54:15 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:23.976 05:54:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.976 05:54:15 accel -- common/autotest_common.sh@10 -- # set +x 00:07:23.976 ************************************ 00:07:23.976 START TEST accel_decomp_full_mcore 00:07:23.976 ************************************ 00:07:23.976 05:54:15 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:23.976 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:23.976 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:23.976 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.976 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:23.976 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.976 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:23.976 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:23.976 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:23.976 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:23.976 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.976 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.976 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:23.976 05:54:15 accel.accel_decomp_full_mcore -- 
accel/accel.sh@40 -- # local IFS=, 00:07:23.976 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:23.976 [2024-07-13 05:54:15.407149] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:23.976 [2024-07-13 05:54:15.407238] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73842 ] 00:07:23.976 [2024-07-13 05:54:15.539433] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:23.976 [2024-07-13 05:54:15.577260] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:23.976 [2024-07-13 05:54:15.577415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:23.976 [2024-07-13 05:54:15.577504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.976 [2024-07-13 05:54:15.577505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:23.976 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:23.976 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.976 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.976 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.976 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:23.976 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.976 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.977 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.977 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:23.977 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.977 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.977 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.977 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:23.977 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.977 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.977 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.977 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:23.977 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.977 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.977 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.977 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:23.977 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.977 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.977 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.977 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:23.977 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.977 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:23.977 05:54:15 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:23.977 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.977 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:23.977 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.977 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.977 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.977 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:23.977 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.977 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.977 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.977 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:23.977 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.977 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:23.977 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.977 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.977 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:23.977 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.977 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.977 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.977 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:23.977 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.977 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.977 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.977 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:23.977 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.977 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.977 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.977 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:23.977 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.977 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.977 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.977 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:23.977 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.977 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.977 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.977 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:23.977 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.977 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.977 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.977 05:54:15 accel.accel_decomp_full_mcore -- 
accel/accel.sh@20 -- # val= 00:07:23.977 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.977 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.977 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.977 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:23.977 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.977 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.977 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.362 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:25.363 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.363 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.363 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.363 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:25.363 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.363 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.363 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.363 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:25.363 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.363 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.363 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.363 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:25.363 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.363 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.363 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.363 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:25.363 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.363 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.363 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.363 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:25.363 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.363 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.363 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.363 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:25.363 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.363 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.363 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.363 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:25.363 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.363 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.363 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.363 05:54:16 accel.accel_decomp_full_mcore -- 
accel/accel.sh@20 -- # val= 00:07:25.363 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.363 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.363 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.363 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:25.363 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:25.363 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:25.363 00:07:25.363 real 0m1.344s 00:07:25.363 user 0m0.019s 00:07:25.363 sys 0m0.005s 00:07:25.363 05:54:16 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:25.363 05:54:16 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:25.363 ************************************ 00:07:25.363 END TEST accel_decomp_full_mcore 00:07:25.363 ************************************ 00:07:25.363 05:54:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:25.363 05:54:16 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:25.363 05:54:16 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:25.363 05:54:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.363 05:54:16 accel -- common/autotest_common.sh@10 -- # set +x 00:07:25.363 ************************************ 00:07:25.363 START TEST accel_decomp_mthread 00:07:25.363 ************************************ 00:07:25.363 05:54:16 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:25.363 05:54:16 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:25.363 05:54:16 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:25.363 05:54:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.363 05:54:16 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:25.363 05:54:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.363 05:54:16 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:25.363 05:54:16 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:25.363 05:54:16 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:25.363 05:54:16 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:25.363 05:54:16 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.363 05:54:16 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.363 05:54:16 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:25.363 05:54:16 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:25.363 05:54:16 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:25.363 [2024-07-13 05:54:16.798697] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:07:25.363 [2024-07-13 05:54:16.798795] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73874 ] 00:07:25.363 [2024-07-13 05:54:16.928801] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.363 [2024-07-13 05:54:16.961884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.363 05:54:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:25.363 05:54:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.363 05:54:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.363 05:54:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.363 05:54:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:25.363 05:54:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.363 05:54:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.363 05:54:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.363 05:54:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:25.363 05:54:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.363 05:54:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.363 05:54:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.363 05:54:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:25.363 05:54:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.363 05:54:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.363 05:54:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.363 05:54:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:25.363 05:54:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.363 05:54:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.364 05:54:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.364 05:54:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:25.364 05:54:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.364 05:54:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.364 05:54:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.364 05:54:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:25.364 05:54:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.364 05:54:16 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:25.364 05:54:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.364 05:54:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.364 05:54:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:25.364 05:54:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.364 05:54:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.364 05:54:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.364 05:54:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:25.364 05:54:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 
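The accel_decomp_mthread case adds -T 2, which appears to run two worker threads per core; the *_full variants elsewhere in this trace additionally pass -o 0, which apparently switches from the default '4096 bytes' per operation to the whole input (the '111250 bytes' value logged for those runs). A hedged sketch of the threaded invocation:

# software decompress with two worker threads on core 0; add -o 0 for the full-file variant
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
    -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2
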
00:07:25.364 05:54:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.364 05:54:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.364 05:54:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:25.364 05:54:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.364 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:25.364 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.364 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.364 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:25.364 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.364 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.364 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.364 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:25.364 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.364 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.364 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.364 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:25.364 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.364 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.364 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.364 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:25.364 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.364 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.364 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.364 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:25.364 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.364 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.364 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.364 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:25.364 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.364 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.364 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.364 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:25.364 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.364 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.364 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.364 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:25.364 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.364 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.364 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.743 05:54:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:26.743 05:54:18 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:07:26.743 05:54:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.743 05:54:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.743 05:54:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:26.743 05:54:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.743 05:54:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.743 05:54:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.743 05:54:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:26.743 05:54:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.743 05:54:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.743 05:54:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.743 05:54:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:26.743 05:54:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.743 05:54:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.743 05:54:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.743 05:54:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:26.743 05:54:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.743 05:54:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.743 05:54:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.743 05:54:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:26.743 05:54:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.743 05:54:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.743 05:54:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.743 05:54:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:26.743 05:54:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.743 05:54:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.743 05:54:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.743 05:54:18 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:26.743 05:54:18 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:26.743 05:54:18 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:26.743 00:07:26.743 real 0m1.311s 00:07:26.743 user 0m1.154s 00:07:26.743 sys 0m0.067s 00:07:26.743 05:54:18 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:26.743 05:54:18 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:26.743 ************************************ 00:07:26.743 END TEST accel_decomp_mthread 00:07:26.743 ************************************ 00:07:26.743 05:54:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:26.743 05:54:18 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:26.743 05:54:18 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:26.743 05:54:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.743 05:54:18 accel -- common/autotest_common.sh@10 -- # set +x 00:07:26.743 ************************************ 00:07:26.743 START 
TEST accel_decomp_full_mthread 00:07:26.743 ************************************ 00:07:26.743 05:54:18 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:26.743 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:26.743 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:26.743 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.743 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:26.744 [2024-07-13 05:54:18.165307] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:07:26.744 [2024-07-13 05:54:18.165414] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73908 ] 00:07:26.744 [2024-07-13 05:54:18.302649] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.744 [2024-07-13 05:54:18.338594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:26.744 05:54:18 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.744 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.123 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:28.123 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.123 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.123 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.123 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:28.123 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.123 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.123 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.123 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:28.123 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.123 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.123 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.123 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:28.123 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.123 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.123 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.123 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:28.123 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.123 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.123 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.123 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:28.123 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.123 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.123 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.123 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:28.123 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.123 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.123 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.123 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:28.123 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:28.123 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:28.123 00:07:28.123 real 0m1.354s 00:07:28.123 user 0m1.193s 00:07:28.123 sys 0m0.071s 00:07:28.123 05:54:19 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:28.123 05:54:19 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:28.123 ************************************ 00:07:28.123 END TEST accel_decomp_full_mthread 00:07:28.123 ************************************ 
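The accel_decomp_full_mthread case that just finished is driven entirely by the accel_perf example binary traced at the start of the test. A minimal way to reproduce the same workload by hand, assuming the flag meanings implied by the trace rather than accel_perf's own help text (-t run time in seconds, -w workload, -l compressed input file, -y verify the output, -T worker thread count matching the "_mthread" name):

    # run the software decompress path against the pre-compressed bib fixture for 1 second,
    # verifying results and using two worker threads; -o 0 is carried over from the trace as-is
    cd /home/vagrant/spdk_repo/spdk
    ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -o 0 -T 2

The harness additionally passes -c /dev/fd/62 to hand accel_perf a generated JSON accel config; the run above reports software as the accel module, which is exactly what the [[ -n software ]] and [[ software == software ]] checks in the trace assert.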
00:07:28.123 05:54:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:28.123 05:54:19 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:28.123 05:54:19 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:28.123 05:54:19 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:28.123 05:54:19 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:28.123 05:54:19 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:28.123 05:54:19 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:28.123 05:54:19 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.123 05:54:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.123 05:54:19 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.123 05:54:19 accel -- common/autotest_common.sh@10 -- # set +x 00:07:28.123 05:54:19 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:28.123 05:54:19 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:28.123 05:54:19 accel -- accel/accel.sh@41 -- # jq -r . 00:07:28.123 ************************************ 00:07:28.123 START TEST accel_dif_functional_tests 00:07:28.123 ************************************ 00:07:28.123 05:54:19 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:28.123 [2024-07-13 05:54:19.602038] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:28.123 [2024-07-13 05:54:19.602130] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73944 ] 00:07:28.123 [2024-07-13 05:54:19.739305] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:28.123 [2024-07-13 05:54:19.772934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:28.123 [2024-07-13 05:54:19.773068] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:28.123 [2024-07-13 05:54:19.773070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.124 [2024-07-13 05:54:19.802208] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:28.124 00:07:28.124 00:07:28.124 CUnit - A unit testing framework for C - Version 2.1-3 00:07:28.124 http://cunit.sourceforge.net/ 00:07:28.124 00:07:28.124 00:07:28.124 Suite: accel_dif 00:07:28.124 Test: verify: DIF generated, GUARD check ...passed 00:07:28.124 Test: verify: DIF generated, APPTAG check ...passed 00:07:28.124 Test: verify: DIF generated, REFTAG check ...passed 00:07:28.124 Test: verify: DIF not generated, GUARD check ...[2024-07-13 05:54:19.820356] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:28.124 passed 00:07:28.124 Test: verify: DIF not generated, APPTAG check ...[2024-07-13 05:54:19.820772] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:28.124 passed 00:07:28.124 Test: verify: DIF not generated, REFTAG check ...[2024-07-13 05:54:19.821118] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5apassed5a 00:07:28.124 00:07:28.124 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:28.124 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-13 05:54:19.821680] dif.c: 841:_dif_verify: 
*ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:28.124 passed 00:07:28.124 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:28.124 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:28.124 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:28.124 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-13 05:54:19.822441] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:28.124 passed 00:07:28.124 Test: verify copy: DIF generated, GUARD check ...passed 00:07:28.124 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:28.124 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:28.124 Test: verify copy: DIF not generated, GUARD check ...passed 00:07:28.124 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-13 05:54:19.823004] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:28.124 [2024-07-13 05:54:19.823240] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:28.124 passed 00:07:28.124 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-13 05:54:19.823497] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:28.124 passed 00:07:28.124 Test: generate copy: DIF generated, GUARD check ...passed 00:07:28.124 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:28.124 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:28.124 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:28.124 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:28.124 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:28.124 Test: generate copy: iovecs-len validate ...[2024-07-13 05:54:19.824495] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
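The *ERROR* lines printed by dif.c in this stretch are expected: each negative-path CUnit test ("DIF not generated", "APPTAG incorrect", "iovecs-len validate", and so on) deliberately feeds mismatching Guard/App/Ref tags or misaligned buffers and treats the resulting verify or generate-copy failure as a pass, so the suite is judged only by the run summary that follows. To rerun just this suite outside the harness, the trace suggests the dif app only needs an accel JSON config path (the config file below is a placeholder, not taken from the log):

    # run the DIF functional suite standalone; the harness supplies its generated
    # accel config over /dev/fd/62, any readable config path is assumed to work here
    cd /home/vagrant/spdk_repo/spdk
    ./test/accel/dif/dif -c /path/to/accel_config.json

Success is the CUnit summary just below reporting all 26 tests and 115 asserts passing.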
00:07:28.124 passed 00:07:28.124 Test: generate copy: buffer alignment validate ...passed 00:07:28.124 00:07:28.124 Run Summary: Type Total Ran Passed Failed Inactive 00:07:28.124 suites 1 1 n/a 0 0 00:07:28.124 tests 26 26 26 0 0 00:07:28.124 asserts 115 115 115 0 n/a 00:07:28.124 00:07:28.124 Elapsed time = 0.010 seconds 00:07:28.383 ************************************ 00:07:28.383 END TEST accel_dif_functional_tests 00:07:28.383 ************************************ 00:07:28.383 00:07:28.383 real 0m0.412s 00:07:28.383 user 0m0.472s 00:07:28.383 sys 0m0.099s 00:07:28.383 05:54:19 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:28.383 05:54:19 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:28.383 05:54:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:28.383 00:07:28.383 real 0m29.589s 00:07:28.383 user 0m31.806s 00:07:28.383 sys 0m2.726s 00:07:28.383 ************************************ 00:07:28.383 END TEST accel 00:07:28.383 ************************************ 00:07:28.383 05:54:20 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:28.383 05:54:20 accel -- common/autotest_common.sh@10 -- # set +x 00:07:28.383 05:54:20 -- common/autotest_common.sh@1142 -- # return 0 00:07:28.383 05:54:20 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:28.383 05:54:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:28.383 05:54:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.383 05:54:20 -- common/autotest_common.sh@10 -- # set +x 00:07:28.383 ************************************ 00:07:28.383 START TEST accel_rpc 00:07:28.383 ************************************ 00:07:28.383 05:54:20 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:28.642 * Looking for test storage... 00:07:28.642 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:28.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.642 05:54:20 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:28.642 05:54:20 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=74008 00:07:28.642 05:54:20 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 74008 00:07:28.642 05:54:20 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 74008 ']' 00:07:28.642 05:54:20 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.642 05:54:20 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:28.642 05:54:20 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:28.643 05:54:20 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.643 05:54:20 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:28.643 05:54:20 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.643 [2024-07-13 05:54:20.196826] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
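The accel_rpc test starting here launches spdk_tgt with --wait-for-rpc so that opcode-to-module assignments can be issued before the accel framework initializes, then starts the framework and reads the assignment back. A condensed sketch of the same flow, assuming the default /var/tmp/spdk.sock RPC socket and using the RPC names visible in the trace that follows:

    cd /home/vagrant/spdk_repo/spdk
    # start the target paused, waiting for RPCs (the harness waits for the socket via waitforlisten)
    ./build/bin/spdk_tgt --wait-for-rpc &
    # assign the copy opcode to the software module before subsystem init
    ./scripts/rpc.py accel_assign_opc -o copy -m software
    # now let the framework initialize; the assignment takes effect here
    ./scripts/rpc.py framework_start_init
    # read the assignment back; the test greps for "software" in this output
    ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy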
00:07:28.643 [2024-07-13 05:54:20.196941] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74008 ] 00:07:28.643 [2024-07-13 05:54:20.330582] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.643 [2024-07-13 05:54:20.365971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.902 05:54:20 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:28.902 05:54:20 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:28.902 05:54:20 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:28.902 05:54:20 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:28.902 05:54:20 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:28.902 05:54:20 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:28.902 05:54:20 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:28.902 05:54:20 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:28.902 05:54:20 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.902 05:54:20 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.902 ************************************ 00:07:28.902 START TEST accel_assign_opcode 00:07:28.902 ************************************ 00:07:28.902 05:54:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:07:28.902 05:54:20 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:28.902 05:54:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.902 05:54:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:28.902 [2024-07-13 05:54:20.426514] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:28.902 05:54:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.902 05:54:20 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:28.902 05:54:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.902 05:54:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:28.902 [2024-07-13 05:54:20.438501] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:28.902 05:54:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.902 05:54:20 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:28.902 05:54:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.902 05:54:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:28.902 [2024-07-13 05:54:20.479219] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:28.902 05:54:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.902 05:54:20 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:28.902 05:54:20 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:28.902 05:54:20 
accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:28.902 05:54:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.902 05:54:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:28.902 05:54:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.162 software 00:07:29.162 ************************************ 00:07:29.162 END TEST accel_assign_opcode 00:07:29.162 ************************************ 00:07:29.162 00:07:29.162 real 0m0.212s 00:07:29.162 user 0m0.053s 00:07:29.162 sys 0m0.018s 00:07:29.162 05:54:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:29.162 05:54:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:29.162 05:54:20 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:29.162 05:54:20 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 74008 00:07:29.162 05:54:20 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 74008 ']' 00:07:29.162 05:54:20 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 74008 00:07:29.162 05:54:20 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:07:29.162 05:54:20 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:29.162 05:54:20 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74008 00:07:29.162 killing process with pid 74008 00:07:29.162 05:54:20 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:29.162 05:54:20 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:29.162 05:54:20 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74008' 00:07:29.162 05:54:20 accel_rpc -- common/autotest_common.sh@967 -- # kill 74008 00:07:29.162 05:54:20 accel_rpc -- common/autotest_common.sh@972 -- # wait 74008 00:07:29.421 00:07:29.421 real 0m0.886s 00:07:29.421 user 0m0.864s 00:07:29.421 sys 0m0.313s 00:07:29.421 05:54:20 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:29.421 ************************************ 00:07:29.421 END TEST accel_rpc 00:07:29.421 ************************************ 00:07:29.421 05:54:20 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.421 05:54:20 -- common/autotest_common.sh@1142 -- # return 0 00:07:29.421 05:54:20 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:29.421 05:54:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:29.421 05:54:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.421 05:54:20 -- common/autotest_common.sh@10 -- # set +x 00:07:29.421 ************************************ 00:07:29.421 START TEST app_cmdline 00:07:29.421 ************************************ 00:07:29.421 05:54:20 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:29.421 * Looking for test storage... 
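The app_cmdline test that begins here restricts the target to an RPC allowlist and then checks both the allowed and the rejected paths; the JSON version object and the "Method not found" error it expects both appear in the trace that follows. A hand-run equivalent, again assuming the default RPC socket:

    cd /home/vagrant/spdk_repo/spdk
    # only spdk_get_version and rpc_get_methods are callable in this target instance
    ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    # allowed: returns the version object (major/minor/patch/suffix/commit)
    ./scripts/rpc.py spdk_get_version
    # allowed: the test sorts this list and expects exactly the two methods above
    ./scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort
    # not on the allowlist: expected to fail with JSON-RPC error -32601 "Method not found"
    ./scripts/rpc.py env_dpdk_get_mem_stats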
00:07:29.421 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:29.421 05:54:21 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:29.421 05:54:21 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=74087 00:07:29.421 05:54:21 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:29.421 05:54:21 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 74087 00:07:29.421 05:54:21 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 74087 ']' 00:07:29.421 05:54:21 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.421 05:54:21 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:29.421 05:54:21 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:29.421 05:54:21 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:29.421 05:54:21 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:29.421 [2024-07-13 05:54:21.118814] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:29.421 [2024-07-13 05:54:21.118908] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74087 ] 00:07:29.681 [2024-07-13 05:54:21.256002] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.681 [2024-07-13 05:54:21.296481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.681 [2024-07-13 05:54:21.327576] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:29.939 05:54:21 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:29.939 05:54:21 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:07:29.939 05:54:21 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:30.198 { 00:07:30.198 "version": "SPDK v24.09-pre git sha1 719d03c6a", 00:07:30.198 "fields": { 00:07:30.198 "major": 24, 00:07:30.198 "minor": 9, 00:07:30.198 "patch": 0, 00:07:30.198 "suffix": "-pre", 00:07:30.198 "commit": "719d03c6a" 00:07:30.198 } 00:07:30.198 } 00:07:30.198 05:54:21 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:30.198 05:54:21 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:30.198 05:54:21 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:30.198 05:54:21 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:30.198 05:54:21 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:30.198 05:54:21 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:30.198 05:54:21 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.198 05:54:21 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:30.198 05:54:21 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:30.198 05:54:21 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.198 05:54:21 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:30.198 05:54:21 app_cmdline -- app/cmdline.sh@28 -- # [[ 
rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:30.198 05:54:21 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:30.198 05:54:21 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:30.199 05:54:21 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:30.199 05:54:21 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:30.199 05:54:21 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:30.199 05:54:21 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:30.199 05:54:21 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:30.199 05:54:21 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:30.199 05:54:21 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:30.199 05:54:21 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:30.199 05:54:21 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:30.199 05:54:21 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:30.458 request: 00:07:30.458 { 00:07:30.458 "method": "env_dpdk_get_mem_stats", 00:07:30.458 "req_id": 1 00:07:30.458 } 00:07:30.458 Got JSON-RPC error response 00:07:30.458 response: 00:07:30.458 { 00:07:30.458 "code": -32601, 00:07:30.458 "message": "Method not found" 00:07:30.458 } 00:07:30.458 05:54:21 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:30.458 05:54:21 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:30.458 05:54:21 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:30.458 05:54:21 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:30.458 05:54:21 app_cmdline -- app/cmdline.sh@1 -- # killprocess 74087 00:07:30.458 05:54:21 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 74087 ']' 00:07:30.458 05:54:21 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 74087 00:07:30.458 05:54:21 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:07:30.458 05:54:21 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:30.458 05:54:21 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74087 00:07:30.458 05:54:21 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:30.458 05:54:21 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:30.458 killing process with pid 74087 00:07:30.458 05:54:21 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74087' 00:07:30.458 05:54:21 app_cmdline -- common/autotest_common.sh@967 -- # kill 74087 00:07:30.458 05:54:21 app_cmdline -- common/autotest_common.sh@972 -- # wait 74087 00:07:30.718 ************************************ 00:07:30.718 END TEST app_cmdline 00:07:30.718 ************************************ 00:07:30.718 00:07:30.718 real 0m1.246s 00:07:30.718 user 0m1.684s 00:07:30.718 sys 0m0.286s 00:07:30.718 05:54:22 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:30.718 05:54:22 app_cmdline -- 
common/autotest_common.sh@10 -- # set +x 00:07:30.718 05:54:22 -- common/autotest_common.sh@1142 -- # return 0 00:07:30.718 05:54:22 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:30.718 05:54:22 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:30.718 05:54:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.718 05:54:22 -- common/autotest_common.sh@10 -- # set +x 00:07:30.718 ************************************ 00:07:30.718 START TEST version 00:07:30.718 ************************************ 00:07:30.718 05:54:22 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:30.718 * Looking for test storage... 00:07:30.718 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:30.718 05:54:22 version -- app/version.sh@17 -- # get_header_version major 00:07:30.718 05:54:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:30.718 05:54:22 version -- app/version.sh@14 -- # cut -f2 00:07:30.718 05:54:22 version -- app/version.sh@14 -- # tr -d '"' 00:07:30.718 05:54:22 version -- app/version.sh@17 -- # major=24 00:07:30.718 05:54:22 version -- app/version.sh@18 -- # get_header_version minor 00:07:30.718 05:54:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:30.718 05:54:22 version -- app/version.sh@14 -- # cut -f2 00:07:30.718 05:54:22 version -- app/version.sh@14 -- # tr -d '"' 00:07:30.718 05:54:22 version -- app/version.sh@18 -- # minor=9 00:07:30.718 05:54:22 version -- app/version.sh@19 -- # get_header_version patch 00:07:30.718 05:54:22 version -- app/version.sh@14 -- # tr -d '"' 00:07:30.718 05:54:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:30.718 05:54:22 version -- app/version.sh@14 -- # cut -f2 00:07:30.718 05:54:22 version -- app/version.sh@19 -- # patch=0 00:07:30.718 05:54:22 version -- app/version.sh@20 -- # get_header_version suffix 00:07:30.718 05:54:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:30.718 05:54:22 version -- app/version.sh@14 -- # tr -d '"' 00:07:30.718 05:54:22 version -- app/version.sh@14 -- # cut -f2 00:07:30.718 05:54:22 version -- app/version.sh@20 -- # suffix=-pre 00:07:30.718 05:54:22 version -- app/version.sh@22 -- # version=24.9 00:07:30.718 05:54:22 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:30.718 05:54:22 version -- app/version.sh@28 -- # version=24.9rc0 00:07:30.718 05:54:22 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:30.718 05:54:22 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:30.718 05:54:22 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:30.718 05:54:22 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:30.718 00:07:30.718 real 0m0.152s 00:07:30.718 user 0m0.083s 00:07:30.718 sys 0m0.099s 00:07:30.718 05:54:22 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:30.718 ************************************ 00:07:30.718 END TEST 
version 00:07:30.718 ************************************ 00:07:30.718 05:54:22 version -- common/autotest_common.sh@10 -- # set +x 00:07:30.977 05:54:22 -- common/autotest_common.sh@1142 -- # return 0 00:07:30.978 05:54:22 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:30.978 05:54:22 -- spdk/autotest.sh@198 -- # uname -s 00:07:30.978 05:54:22 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:30.978 05:54:22 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:30.978 05:54:22 -- spdk/autotest.sh@199 -- # [[ 1 -eq 1 ]] 00:07:30.978 05:54:22 -- spdk/autotest.sh@205 -- # [[ 0 -eq 0 ]] 00:07:30.978 05:54:22 -- spdk/autotest.sh@206 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:30.978 05:54:22 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:30.978 05:54:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.978 05:54:22 -- common/autotest_common.sh@10 -- # set +x 00:07:30.978 ************************************ 00:07:30.978 START TEST spdk_dd 00:07:30.978 ************************************ 00:07:30.978 05:54:22 spdk_dd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:30.978 * Looking for test storage... 00:07:30.978 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:30.978 05:54:22 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:30.978 05:54:22 spdk_dd -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:30.978 05:54:22 spdk_dd -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:30.978 05:54:22 spdk_dd -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:30.978 05:54:22 spdk_dd -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.978 05:54:22 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.978 05:54:22 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.978 05:54:22 spdk_dd -- paths/export.sh@5 -- # export PATH 00:07:30.978 05:54:22 spdk_dd -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.978 05:54:22 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:31.237 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:31.237 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:31.237 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:31.237 05:54:22 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:07:31.237 05:54:22 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:07:31.237 05:54:22 spdk_dd -- scripts/common.sh@309 -- # local bdf bdfs 00:07:31.237 05:54:22 spdk_dd -- scripts/common.sh@310 -- # local nvmes 00:07:31.237 05:54:22 spdk_dd -- scripts/common.sh@312 -- # [[ -n '' ]] 00:07:31.237 05:54:22 spdk_dd -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:07:31.237 05:54:22 spdk_dd -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:07:31.237 05:54:22 spdk_dd -- scripts/common.sh@295 -- # local bdf= 00:07:31.237 05:54:22 spdk_dd -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:07:31.237 05:54:22 spdk_dd -- scripts/common.sh@230 -- # local class 00:07:31.237 05:54:22 spdk_dd -- scripts/common.sh@231 -- # local subclass 00:07:31.237 05:54:22 spdk_dd -- scripts/common.sh@232 -- # local progif 00:07:31.499 05:54:22 spdk_dd -- scripts/common.sh@233 -- # printf %02x 1 00:07:31.499 05:54:22 spdk_dd -- scripts/common.sh@233 -- # class=01 00:07:31.499 05:54:22 spdk_dd -- scripts/common.sh@234 -- # printf %02x 8 00:07:31.499 05:54:22 spdk_dd -- scripts/common.sh@234 -- # subclass=08 00:07:31.499 05:54:22 spdk_dd -- scripts/common.sh@235 -- # printf %02x 2 00:07:31.499 05:54:22 spdk_dd -- scripts/common.sh@235 -- # progif=02 00:07:31.499 05:54:22 spdk_dd -- scripts/common.sh@237 -- # hash lspci 00:07:31.499 05:54:22 spdk_dd -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:07:31.499 05:54:22 spdk_dd -- scripts/common.sh@239 -- # lspci -mm -n -D 00:07:31.499 05:54:22 spdk_dd -- scripts/common.sh@240 -- # grep -i -- -p02 00:07:31.499 05:54:22 spdk_dd -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:07:31.499 05:54:22 spdk_dd -- scripts/common.sh@242 -- # tr -d '"' 00:07:31.499 05:54:22 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:31.499 05:54:22 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:07:31.499 05:54:22 spdk_dd -- scripts/common.sh@15 -- # local i 00:07:31.499 05:54:22 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:07:31.499 05:54:22 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:07:31.499 05:54:22 spdk_dd -- scripts/common.sh@24 -- # return 0 00:07:31.499 05:54:22 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:07:31.499 05:54:22 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:31.499 05:54:22 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:07:31.499 05:54:22 spdk_dd -- scripts/common.sh@15 -- # local i 00:07:31.499 05:54:22 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:07:31.499 05:54:22 
spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:07:31.499 05:54:22 spdk_dd -- scripts/common.sh@24 -- # return 0 00:07:31.499 05:54:22 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:07:31.499 05:54:22 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:07:31.500 05:54:22 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:07:31.500 05:54:22 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:07:31.500 05:54:22 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:07:31.500 05:54:22 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:07:31.500 05:54:22 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:07:31.500 05:54:22 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:07:31.500 05:54:22 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:07:31.500 05:54:22 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:07:31.500 05:54:22 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:07:31.500 05:54:22 spdk_dd -- scripts/common.sh@325 -- # (( 2 )) 00:07:31.500 05:54:22 spdk_dd -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:07:31.500 05:54:22 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:07:31.500 05:54:22 spdk_dd -- dd/common.sh@139 -- # local lib so 00:07:31.500 05:54:22 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:07:31.500 05:54:22 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.500 05:54:22 spdk_dd -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 
00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.13.1 == liburing.so.* ]] 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.6.0 == liburing.so.* ]] 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:07:31.500 05:54:23 
spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.0 == liburing.so.* ]] 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.14.1 == liburing.so.* ]] 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.1.0 == liburing.so.* ]] 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.15.1 == liburing.so.* ]] 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.15.1 == liburing.so.* ]] 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.4.0 == liburing.so.* ]] 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:07:31.500 
05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.500 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:07:31.501 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.501 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.5.0 == liburing.so.* ]] 00:07:31.501 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.501 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.1 == liburing.so.* ]] 00:07:31.501 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.501 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.10.0 == liburing.so.* ]] 00:07:31.501 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.501 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.1.0 == liburing.so.* ]] 00:07:31.501 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.501 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:07:31.501 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.501 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:07:31.501 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.501 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:07:31.501 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.501 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.9.1 == liburing.so.* ]] 00:07:31.501 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.501 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]] 00:07:31.501 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.501 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.23 == liburing.so.* ]] 00:07:31.501 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.501 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.23 == liburing.so.* ]] 00:07:31.501 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.501 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.23 == liburing.so.* ]] 00:07:31.501 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.501 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.23 == liburing.so.* ]] 00:07:31.501 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.501 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.23 == liburing.so.* ]] 00:07:31.501 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.501 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.23 == liburing.so.* ]] 00:07:31.501 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.501 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.23 == liburing.so.* ]] 00:07:31.501 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.501 05:54:23 
spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.23 == liburing.so.* ]] 00:07:31.501 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.501 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.23 == liburing.so.* ]] 00:07:31.501 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.501 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.23 == liburing.so.* ]] 00:07:31.501 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.501 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.23 == liburing.so.* ]] 00:07:31.501 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.501 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.23 == liburing.so.* ]] 00:07:31.501 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.501 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.23 == liburing.so.* ]] 00:07:31.501 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.501 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.23 == liburing.so.* ]] 00:07:31.501 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.501 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.23 == liburing.so.* ]] 00:07:31.501 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.501 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.23 == liburing.so.* ]] 00:07:31.501 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.501 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.23 == liburing.so.* ]] 00:07:31.501 05:54:23 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:31.501 05:54:23 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:07:31.501 05:54:23 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:07:31.501 * spdk_dd linked to liburing 00:07:31.501 05:54:23 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:07:31.501 05:54:23 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:07:31.501 05:54:23 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:31.501 05:54:23 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:31.501 05:54:23 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:31.501 05:54:23 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:31.501 05:54:23 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:07:31.501 05:54:23 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:31.501 05:54:23 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:31.501 05:54:23 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:31.501 05:54:23 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:31.501 05:54:23 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:31.501 05:54:23 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:31.501 05:54:23 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:31.501 05:54:23 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:31.501 05:54:23 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:31.501 05:54:23 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:31.501 05:54:23 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:31.501 05:54:23 spdk_dd -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 
00:07:31.501 05:54:23 spdk_dd -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:31.501 05:54:23 spdk_dd -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:31.501 05:54:23 spdk_dd -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:31.501 05:54:23 spdk_dd -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:31.501 05:54:23 spdk_dd -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:31.501 05:54:23 spdk_dd -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:31.501 05:54:23 spdk_dd -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:31.501 05:54:23 spdk_dd -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:31.501 05:54:23 spdk_dd -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:31.501 05:54:23 spdk_dd -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:31.501 05:54:23 spdk_dd -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:31.501 05:54:23 spdk_dd -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:31.501 05:54:23 spdk_dd -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:31.501 05:54:23 spdk_dd -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:31.501 05:54:23 spdk_dd -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:31.501 05:54:23 spdk_dd -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:31.501 05:54:23 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:31.501 05:54:23 spdk_dd -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:31.501 05:54:23 spdk_dd -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:07:31.501 05:54:23 spdk_dd -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:31.501 05:54:23 spdk_dd -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:31.501 05:54:23 spdk_dd -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:31.501 05:54:23 spdk_dd -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:31.501 05:54:23 spdk_dd -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:07:31.501 05:54:23 spdk_dd -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:31.501 05:54:23 spdk_dd -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:31.501 05:54:23 spdk_dd -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:31.501 05:54:23 spdk_dd -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:31.501 05:54:23 spdk_dd -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:31.501 05:54:23 spdk_dd -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:31.501 05:54:23 spdk_dd -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:31.501 05:54:23 spdk_dd -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:31.501 05:54:23 spdk_dd -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:31.501 05:54:23 spdk_dd -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:31.501 05:54:23 spdk_dd -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:07:31.501 05:54:23 spdk_dd -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:31.501 05:54:23 spdk_dd -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:31.501 05:54:23 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=y 00:07:31.501 05:54:23 spdk_dd -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:31.501 05:54:23 spdk_dd -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:31.501 05:54:23 spdk_dd -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:31.501 05:54:23 
spdk_dd -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:31.501 05:54:23 spdk_dd -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:31.501 05:54:23 spdk_dd -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:31.501 05:54:23 spdk_dd -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:31.501 05:54:23 spdk_dd -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:07:31.501 05:54:23 spdk_dd -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:31.501 05:54:23 spdk_dd -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:31.501 05:54:23 spdk_dd -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:31.501 05:54:23 spdk_dd -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:31.501 05:54:23 spdk_dd -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:31.501 05:54:23 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:31.501 05:54:23 spdk_dd -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:31.502 05:54:23 spdk_dd -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:31.502 05:54:23 spdk_dd -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:31.502 05:54:23 spdk_dd -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:31.502 05:54:23 spdk_dd -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:31.502 05:54:23 spdk_dd -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:31.502 05:54:23 spdk_dd -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:31.502 05:54:23 spdk_dd -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:07:31.502 05:54:23 spdk_dd -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:31.502 05:54:23 spdk_dd -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:31.502 05:54:23 spdk_dd -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:31.502 05:54:23 spdk_dd -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:31.502 05:54:23 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:31.502 05:54:23 spdk_dd -- common/build_config.sh@83 -- # CONFIG_URING=y 00:07:31.502 05:54:23 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:07:31.502 05:54:23 spdk_dd -- dd/common.sh@152 -- # [[ ! -e /usr/lib64/liburing.so.2 ]] 00:07:31.502 05:54:23 spdk_dd -- dd/common.sh@156 -- # export liburing_in_use=1 00:07:31.502 05:54:23 spdk_dd -- dd/common.sh@156 -- # liburing_in_use=1 00:07:31.502 05:54:23 spdk_dd -- dd/common.sh@157 -- # return 0 00:07:31.502 05:54:23 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:07:31.502 05:54:23 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:07:31.502 05:54:23 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:31.502 05:54:23 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.502 05:54:23 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:31.502 ************************************ 00:07:31.502 START TEST spdk_dd_basic_rw 00:07:31.502 ************************************ 00:07:31.502 05:54:23 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:07:31.502 * Looking for test storage... 
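The long run of checks above is dd/common.sh (lines 142-157) deciding whether this spdk_dd binary was built against liburing: it walks the shared objects the binary links, and once liburing.so.* appears it sources build_config.sh, confirms the runtime library /usr/lib64/liburing.so.2 exists, and exports liburing_in_use=1 so the guard at dd.sh@15 falls through and run_test dispatches spdk_dd_basic_rw. A minimal sketch of that check, using the variable names from the trace; the ldd pipeline, $rootdir/$SPDK_BIN_DIR locations, and CONFIG_URING being the value behind the "[[ y != y ]]" test are assumptions:

  liburing_in_use=0
  while read -r lib _ so _; do
      # e.g. "liburing.so.2 => /usr/lib64/liburing.so.2 (0x...)" -> lib=liburing.so.2, so=/usr/lib64/liburing.so.2
      [[ $lib == liburing.so.* ]] || continue
      printf '* spdk_dd linked to liburing\n'
      # Confirm the build really enabled uring and the runtime library is installed.
      [[ -e $rootdir/test/common/build_config.sh ]] && source "$rootdir/test/common/build_config.sh"
      if [[ $CONFIG_URING == y ]] && [[ -e /usr/lib64/liburing.so.2 ]]; then
          export liburing_in_use=1
      fi
      break
  done < <(ldd "$SPDK_BIN_DIR/spdk_dd")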
00:07:31.502 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:31.502 05:54:23 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:31.502 05:54:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:31.502 05:54:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:31.502 05:54:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:31.502 05:54:23 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.502 05:54:23 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.502 05:54:23 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.502 05:54:23 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:07:31.502 05:54:23 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.502 05:54:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:07:31.502 05:54:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:07:31.502 05:54:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:07:31.502 05:54:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:07:31.502 05:54:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:07:31.502 05:54:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # 
method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:07:31.502 05:54:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:31.502 05:54:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:31.502 05:54:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:31.502 05:54:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:07:31.502 05:54:23 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:07:31.502 05:54:23 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:07:31.502 05:54:23 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:07:31.763 05:54:23 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace 
Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not 
Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:07:31.763 05:54:23 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:07:31.763 05:54:23 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial 
Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported 
Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private 
Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:07:31.763 05:54:23 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:07:31.763 05:54:23 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:07:31.763 05:54:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:07:31.763 05:54:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:07:31.763 05:54:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:31.763 05:54:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:07:31.763 05:54:23 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:31.763 05:54:23 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:31.763 05:54:23 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:31.763 05:54:23 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.763 05:54:23 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:31.763 ************************************ 00:07:31.763 START TEST dd_bs_lt_native_bs 00:07:31.763 ************************************ 00:07:31.763 05:54:23 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1123 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:31.763 05:54:23 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@648 -- # local es=0 00:07:31.763 05:54:23 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # valid_exec_arg 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:31.764 05:54:23 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:31.764 05:54:23 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:31.764 05:54:23 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:31.764 05:54:23 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:31.764 05:54:23 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:31.764 05:54:23 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:31.764 05:54:23 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:31.764 05:54:23 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:31.764 05:54:23 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:31.764 { 00:07:31.764 "subsystems": [ 00:07:31.764 { 00:07:31.764 "subsystem": "bdev", 00:07:31.764 "config": [ 00:07:31.764 { 00:07:31.764 "params": { 00:07:31.764 "trtype": "pcie", 00:07:31.764 "traddr": "0000:00:10.0", 00:07:31.764 "name": "Nvme0" 00:07:31.764 }, 00:07:31.764 "method": "bdev_nvme_attach_controller" 00:07:31.764 }, 00:07:31.764 { 00:07:31.764 "method": "bdev_wait_for_examine" 00:07:31.764 } 00:07:31.764 ] 00:07:31.764 } 00:07:31.764 ] 00:07:31.764 } 00:07:31.764 [2024-07-13 05:54:23.418834] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
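The two large identify dumps above are get_native_nvme_bs (dd/common.sh@124-134) resolving the drive's native block size: the spdk_nvme_identify report is captured with mapfile, one regex pulls the currently selected LBA format index (#04 here), a second pulls that format's data size (4096), and basic_rw.sh@93 stores the result as native_bs. A sketch of that parsing with the regexes copied from the trace; the $SPDK_BIN_DIR shorthand and the re temporary are the only liberties taken:

  get_native_nvme_bs() {
      local pci=$1 lbaf id re
      # Capture the whole identify report; ${id[*]} joins it into one string for =~.
      mapfile -t id < <("$SPDK_BIN_DIR/spdk_nvme_identify" -r "trtype:pcie traddr:$pci")
      re='Current LBA Format: *LBA Format #([0-9]+)'
      [[ ${id[*]} =~ $re ]] && lbaf=${BASH_REMATCH[1]}          # -> 04
      re="LBA Format #$lbaf: Data Size: *([0-9]+)"
      [[ ${id[*]} =~ $re ]] && echo "${BASH_REMATCH[1]}"        # -> 4096
  }

  native_bs=$(get_native_nvme_bs 0000:00:10.0)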
00:07:31.764 [2024-07-13 05:54:23.418944] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74401 ] 00:07:32.022 [2024-07-13 05:54:23.558210] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.022 [2024-07-13 05:54:23.601866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.022 [2024-07-13 05:54:23.637359] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:32.022 [2024-07-13 05:54:23.727275] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:07:32.022 [2024-07-13 05:54:23.727343] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:32.280 [2024-07-13 05:54:23.799506] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:32.280 05:54:23 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # es=234 00:07:32.280 05:54:23 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:32.280 05:54:23 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@660 -- # es=106 00:07:32.280 05:54:23 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # case "$es" in 00:07:32.280 05:54:23 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@668 -- # es=1 00:07:32.280 05:54:23 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:32.280 00:07:32.280 real 0m0.517s 00:07:32.280 user 0m0.366s 00:07:32.280 sys 0m0.124s 00:07:32.280 05:54:23 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:32.280 05:54:23 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:07:32.280 ************************************ 00:07:32.280 END TEST dd_bs_lt_native_bs 00:07:32.280 ************************************ 00:07:32.280 05:54:23 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:07:32.280 05:54:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:07:32.280 05:54:23 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:32.280 05:54:23 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.280 05:54:23 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:32.280 ************************************ 00:07:32.280 START TEST dd_rw 00:07:32.280 ************************************ 00:07:32.280 05:54:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1123 -- # basic_rw 4096 00:07:32.280 05:54:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:07:32.280 05:54:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:07:32.280 05:54:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:07:32.280 05:54:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:07:32.280 05:54:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:32.280 05:54:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:32.280 05:54:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- 
dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:32.280 05:54:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:32.280 05:54:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:32.280 05:54:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:32.280 05:54:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:32.280 05:54:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:32.280 05:54:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:07:32.280 05:54:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:07:32.280 05:54:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:07:32.280 05:54:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:32.280 05:54:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:32.280 05:54:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:32.848 05:54:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:07:32.848 05:54:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:32.848 05:54:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:32.848 05:54:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:32.848 [2024-07-13 05:54:24.570681] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:32.848 [2024-07-13 05:54:24.570809] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74432 ] 00:07:32.848 { 00:07:32.848 "subsystems": [ 00:07:32.848 { 00:07:32.848 "subsystem": "bdev", 00:07:32.848 "config": [ 00:07:32.848 { 00:07:32.848 "params": { 00:07:32.848 "trtype": "pcie", 00:07:32.848 "traddr": "0000:00:10.0", 00:07:32.848 "name": "Nvme0" 00:07:32.848 }, 00:07:32.848 "method": "bdev_nvme_attach_controller" 00:07:32.848 }, 00:07:32.848 { 00:07:32.848 "method": "bdev_wait_for_examine" 00:07:32.848 } 00:07:32.848 ] 00:07:32.848 } 00:07:32.848 ] 00:07:32.848 } 00:07:33.107 [2024-07-13 05:54:24.710930] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.107 [2024-07-13 05:54:24.750307] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.107 [2024-07-13 05:54:24.782143] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:33.367  Copying: 60/60 [kB] (average 29 MBps) 00:07:33.367 00:07:33.367 05:54:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:07:33.367 05:54:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:33.367 05:54:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:33.367 05:54:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:33.367 [2024-07-13 05:54:25.081841] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
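The dd_bs_lt_native_bs case that finished just before the dd_rw banner is a negative test: with native_bs resolved to 4096, spdk_dd is run through the NOT helper with --bs=2048 and is expected to refuse ("--bs value cannot be less than ... native block size"). The failing run exits with status 234; the autotest_common trace then maps that signal-range code down to 106 and finally to es=1, meaning "failed as expected", and the test passes in about half a second. A condensed sketch of that expectation; NOT and the /dev/fd/61-62 plumbing from the trace are simplified to a plain if, and the file names are the dd.dump0 convention assumed from basic_rw.sh@91:

  if "$SPDK_BIN_DIR/spdk_dd" --if="$test_file0" --ob=Nvme0n1 --bs=$((native_bs / 2)) --json <(gen_conf); then
      echo "spdk_dd accepted --bs smaller than the native block size" >&2
      exit 1
  fi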
00:07:33.367 [2024-07-13 05:54:25.081977] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74440 ] 00:07:33.367 { 00:07:33.367 "subsystems": [ 00:07:33.367 { 00:07:33.367 "subsystem": "bdev", 00:07:33.367 "config": [ 00:07:33.367 { 00:07:33.367 "params": { 00:07:33.367 "trtype": "pcie", 00:07:33.367 "traddr": "0000:00:10.0", 00:07:33.367 "name": "Nvme0" 00:07:33.367 }, 00:07:33.367 "method": "bdev_nvme_attach_controller" 00:07:33.367 }, 00:07:33.367 { 00:07:33.367 "method": "bdev_wait_for_examine" 00:07:33.367 } 00:07:33.367 ] 00:07:33.367 } 00:07:33.367 ] 00:07:33.367 } 00:07:33.626 [2024-07-13 05:54:25.220054] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.626 [2024-07-13 05:54:25.259450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.626 [2024-07-13 05:54:25.289540] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:33.883  Copying: 60/60 [kB] (average 19 MBps) 00:07:33.883 00:07:33.883 05:54:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:33.883 05:54:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:33.883 05:54:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:33.883 05:54:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:33.883 05:54:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:07:33.883 05:54:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:33.883 05:54:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:33.883 05:54:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:33.883 05:54:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:33.883 05:54:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:33.883 05:54:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:33.883 { 00:07:33.883 "subsystems": [ 00:07:33.883 { 00:07:33.883 "subsystem": "bdev", 00:07:33.883 "config": [ 00:07:33.883 { 00:07:33.883 "params": { 00:07:33.883 "trtype": "pcie", 00:07:33.883 "traddr": "0000:00:10.0", 00:07:33.883 "name": "Nvme0" 00:07:33.883 }, 00:07:33.883 "method": "bdev_nvme_attach_controller" 00:07:33.883 }, 00:07:33.883 { 00:07:33.883 "method": "bdev_wait_for_examine" 00:07:33.883 } 00:07:33.883 ] 00:07:33.883 } 00:07:33.883 ] 00:07:33.883 } 00:07:33.883 [2024-07-13 05:54:25.570660] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
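The dd_rw passes running above and below follow the parameter matrix set up at basic_rw.sh@15-25: qds is (1 64) and the block sizes are native_bs shifted left 0, 1 and 2 bits, so 4096, 8192 and 16384 on this drive; the counts seen in the trace (15 at bs=4096, 7 at bs=8192) keep each transfer near 60 KiB. A sketch of that sweep; the count formula and the gen_bytes redirection are assumptions inferred from the traced values:

  native_bs=4096
  qds=(1 64)
  bss=()
  for bs in {0..2}; do
      bss+=($((native_bs << bs)))          # 4096 8192 16384
  done
  for bs in "${bss[@]}"; do
      for qd in "${qds[@]}"; do
          count=$((61440 / bs))            # 15, 7, then 3 if the pattern holds
          size=$((count * bs))             # 61440, 57344, then 49152
          gen_bytes "$size" > "$test_file0"   # assumption: gen_bytes emits the test pattern on stdout
          # write/read/verify for this (bs, qd) pair -- see the round-trip sketch further below
      done
  done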
00:07:33.884 [2024-07-13 05:54:25.570768] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74461 ] 00:07:34.142 [2024-07-13 05:54:25.707030] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.142 [2024-07-13 05:54:25.743177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.142 [2024-07-13 05:54:25.774524] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:34.400  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:34.400 00:07:34.400 05:54:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:34.400 05:54:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:07:34.400 05:54:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:07:34.400 05:54:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:07:34.400 05:54:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:34.400 05:54:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:34.400 05:54:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:34.966 05:54:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:07:34.966 05:54:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:34.966 05:54:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:34.966 05:54:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:34.966 [2024-07-13 05:54:26.621795] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
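Each (bs, qd) pass follows the same round trip traced at basic_rw.sh@30-45: write the generated file to the Nvme0n1 bdev, read the same region back into a second dump file, diff the two, then blank the bdev with a 1 MiB zero write before the next combination. The JSON repeated throughout the trace is produced by gen_conf and handed to spdk_dd over /dev/fd; the process substitutions below stand in for that plumbing:

  "$SPDK_BIN_DIR/spdk_dd" --if="$test_file0" --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json <(gen_conf)
  "$SPDK_BIN_DIR/spdk_dd" --ib=Nvme0n1 --of="$test_file1" --bs="$bs" --qd="$qd" --count="$count" --json <(gen_conf)
  diff -q "$test_file0" "$test_file1"                     # the read-back must match byte for byte
  # clear_nvme: one 1 MiB write of zeros wipes the region touched by this pass
  "$SPDK_BIN_DIR/spdk_dd" --if=/dev/zero --ob=Nvme0n1 --bs=1048576 --count=1 --json <(gen_conf)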
00:07:34.966 [2024-07-13 05:54:26.621920] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74480 ] 00:07:34.966 { 00:07:34.966 "subsystems": [ 00:07:34.966 { 00:07:34.966 "subsystem": "bdev", 00:07:34.966 "config": [ 00:07:34.966 { 00:07:34.966 "params": { 00:07:34.966 "trtype": "pcie", 00:07:34.966 "traddr": "0000:00:10.0", 00:07:34.966 "name": "Nvme0" 00:07:34.966 }, 00:07:34.966 "method": "bdev_nvme_attach_controller" 00:07:34.966 }, 00:07:34.966 { 00:07:34.966 "method": "bdev_wait_for_examine" 00:07:34.966 } 00:07:34.966 ] 00:07:34.966 } 00:07:34.966 ] 00:07:34.966 } 00:07:35.226 [2024-07-13 05:54:26.760228] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.226 [2024-07-13 05:54:26.799191] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.226 [2024-07-13 05:54:26.829272] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:35.483  Copying: 60/60 [kB] (average 58 MBps) 00:07:35.483 00:07:35.483 05:54:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:07:35.483 05:54:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:35.483 05:54:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:35.483 05:54:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:35.484 [2024-07-13 05:54:27.109078] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
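Every spdk_dd invocation in this suite carries the same --json configuration, built from the method_bdev_nvme_attach_controller_0 array declared at basic_rw.sh@85 and streamed over /dev/fd. Reproduced from the trace with the log prefixes stripped (bdev_wait_for_examine makes spdk_dd block until the attached controller's namespace is registered as Nvme0n1):

  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
            "method": "bdev_nvme_attach_controller"
          },
          { "method": "bdev_wait_for_examine" }
        ]
      }
    ]
  }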
00:07:35.484 [2024-07-13 05:54:27.109177] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74488 ] 00:07:35.484 { 00:07:35.484 "subsystems": [ 00:07:35.484 { 00:07:35.484 "subsystem": "bdev", 00:07:35.484 "config": [ 00:07:35.484 { 00:07:35.484 "params": { 00:07:35.484 "trtype": "pcie", 00:07:35.484 "traddr": "0000:00:10.0", 00:07:35.484 "name": "Nvme0" 00:07:35.484 }, 00:07:35.484 "method": "bdev_nvme_attach_controller" 00:07:35.484 }, 00:07:35.484 { 00:07:35.484 "method": "bdev_wait_for_examine" 00:07:35.484 } 00:07:35.484 ] 00:07:35.484 } 00:07:35.484 ] 00:07:35.484 } 00:07:35.741 [2024-07-13 05:54:27.244781] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.741 [2024-07-13 05:54:27.279753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.741 [2024-07-13 05:54:27.307984] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:36.001  Copying: 60/60 [kB] (average 58 MBps) 00:07:36.001 00:07:36.001 05:54:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:36.001 05:54:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:36.001 05:54:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:36.001 05:54:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:36.001 05:54:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:07:36.001 05:54:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:36.001 05:54:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:36.001 05:54:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:36.001 05:54:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:36.001 05:54:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:36.001 05:54:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:36.001 { 00:07:36.001 "subsystems": [ 00:07:36.001 { 00:07:36.001 "subsystem": "bdev", 00:07:36.001 "config": [ 00:07:36.001 { 00:07:36.001 "params": { 00:07:36.001 "trtype": "pcie", 00:07:36.001 "traddr": "0000:00:10.0", 00:07:36.001 "name": "Nvme0" 00:07:36.001 }, 00:07:36.001 "method": "bdev_nvme_attach_controller" 00:07:36.001 }, 00:07:36.001 { 00:07:36.001 "method": "bdev_wait_for_examine" 00:07:36.001 } 00:07:36.001 ] 00:07:36.001 } 00:07:36.001 ] 00:07:36.001 } 00:07:36.001 [2024-07-13 05:54:27.599281] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:07:36.001 [2024-07-13 05:54:27.599419] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74509 ] 00:07:36.259 [2024-07-13 05:54:27.738974] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.259 [2024-07-13 05:54:27.773142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.259 [2024-07-13 05:54:27.803461] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:36.517  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:36.517 00:07:36.517 05:54:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:36.517 05:54:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:36.517 05:54:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:07:36.517 05:54:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:07:36.517 05:54:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:07:36.517 05:54:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:36.517 05:54:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:36.517 05:54:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:37.083 05:54:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:07:37.083 05:54:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:37.083 05:54:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:37.083 05:54:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:37.083 [2024-07-13 05:54:28.651500] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:07:37.083 [2024-07-13 05:54:28.651604] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74528 ] 00:07:37.083 { 00:07:37.083 "subsystems": [ 00:07:37.083 { 00:07:37.083 "subsystem": "bdev", 00:07:37.083 "config": [ 00:07:37.083 { 00:07:37.083 "params": { 00:07:37.083 "trtype": "pcie", 00:07:37.083 "traddr": "0000:00:10.0", 00:07:37.083 "name": "Nvme0" 00:07:37.083 }, 00:07:37.083 "method": "bdev_nvme_attach_controller" 00:07:37.083 }, 00:07:37.083 { 00:07:37.083 "method": "bdev_wait_for_examine" 00:07:37.083 } 00:07:37.083 ] 00:07:37.083 } 00:07:37.083 ] 00:07:37.083 } 00:07:37.083 [2024-07-13 05:54:28.787725] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.341 [2024-07-13 05:54:28.823071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.341 [2024-07-13 05:54:28.853013] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:37.599  Copying: 56/56 [kB] (average 54 MBps) 00:07:37.599 00:07:37.599 05:54:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:07:37.599 05:54:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:37.599 05:54:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:37.599 05:54:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:37.599 [2024-07-13 05:54:29.147707] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:07:37.599 [2024-07-13 05:54:29.147830] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74536 ] 00:07:37.599 { 00:07:37.599 "subsystems": [ 00:07:37.599 { 00:07:37.599 "subsystem": "bdev", 00:07:37.599 "config": [ 00:07:37.599 { 00:07:37.599 "params": { 00:07:37.599 "trtype": "pcie", 00:07:37.599 "traddr": "0000:00:10.0", 00:07:37.599 "name": "Nvme0" 00:07:37.599 }, 00:07:37.599 "method": "bdev_nvme_attach_controller" 00:07:37.599 }, 00:07:37.599 { 00:07:37.599 "method": "bdev_wait_for_examine" 00:07:37.599 } 00:07:37.599 ] 00:07:37.599 } 00:07:37.599 ] 00:07:37.599 } 00:07:37.599 [2024-07-13 05:54:29.286124] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.599 [2024-07-13 05:54:29.321345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.858 [2024-07-13 05:54:29.354646] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:37.858  Copying: 56/56 [kB] (average 27 MBps) 00:07:37.858 00:07:37.858 05:54:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:38.117 05:54:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:38.117 05:54:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:38.117 05:54:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:38.117 05:54:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:07:38.117 05:54:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:38.117 05:54:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:38.117 05:54:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:38.117 05:54:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:38.117 05:54:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:38.117 05:54:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:38.117 [2024-07-13 05:54:29.637309] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:07:38.117 [2024-07-13 05:54:29.638055] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74557 ] 00:07:38.117 { 00:07:38.117 "subsystems": [ 00:07:38.117 { 00:07:38.117 "subsystem": "bdev", 00:07:38.117 "config": [ 00:07:38.117 { 00:07:38.117 "params": { 00:07:38.117 "trtype": "pcie", 00:07:38.117 "traddr": "0000:00:10.0", 00:07:38.117 "name": "Nvme0" 00:07:38.117 }, 00:07:38.117 "method": "bdev_nvme_attach_controller" 00:07:38.117 }, 00:07:38.118 { 00:07:38.118 "method": "bdev_wait_for_examine" 00:07:38.118 } 00:07:38.118 ] 00:07:38.118 } 00:07:38.118 ] 00:07:38.118 } 00:07:38.118 [2024-07-13 05:54:29.778339] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.118 [2024-07-13 05:54:29.812101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.118 [2024-07-13 05:54:29.840755] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:38.376  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:38.376 00:07:38.376 05:54:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:38.376 05:54:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:07:38.376 05:54:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:07:38.376 05:54:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:07:38.376 05:54:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:38.376 05:54:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:38.377 05:54:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:38.943 05:54:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:07:38.943 05:54:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:38.943 05:54:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:38.943 05:54:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:38.943 [2024-07-13 05:54:30.641427] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:07:38.943 [2024-07-13 05:54:30.641765] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74576 ] 00:07:38.943 { 00:07:38.943 "subsystems": [ 00:07:38.943 { 00:07:38.943 "subsystem": "bdev", 00:07:38.943 "config": [ 00:07:38.943 { 00:07:38.943 "params": { 00:07:38.943 "trtype": "pcie", 00:07:38.943 "traddr": "0000:00:10.0", 00:07:38.943 "name": "Nvme0" 00:07:38.943 }, 00:07:38.943 "method": "bdev_nvme_attach_controller" 00:07:38.943 }, 00:07:38.943 { 00:07:38.943 "method": "bdev_wait_for_examine" 00:07:38.943 } 00:07:38.943 ] 00:07:38.943 } 00:07:38.943 ] 00:07:38.943 } 00:07:39.202 [2024-07-13 05:54:30.773864] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.202 [2024-07-13 05:54:30.808099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.202 [2024-07-13 05:54:30.837053] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:39.460  Copying: 56/56 [kB] (average 54 MBps) 00:07:39.460 00:07:39.460 05:54:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:07:39.460 05:54:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:39.460 05:54:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:39.460 05:54:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:39.460 [2024-07-13 05:54:31.113997] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:07:39.460 [2024-07-13 05:54:31.114088] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74584 ] 00:07:39.460 { 00:07:39.460 "subsystems": [ 00:07:39.460 { 00:07:39.460 "subsystem": "bdev", 00:07:39.460 "config": [ 00:07:39.460 { 00:07:39.460 "params": { 00:07:39.460 "trtype": "pcie", 00:07:39.460 "traddr": "0000:00:10.0", 00:07:39.460 "name": "Nvme0" 00:07:39.460 }, 00:07:39.460 "method": "bdev_nvme_attach_controller" 00:07:39.460 }, 00:07:39.460 { 00:07:39.460 "method": "bdev_wait_for_examine" 00:07:39.460 } 00:07:39.460 ] 00:07:39.460 } 00:07:39.460 ] 00:07:39.460 } 00:07:39.718 [2024-07-13 05:54:31.249307] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.718 [2024-07-13 05:54:31.283672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.718 [2024-07-13 05:54:31.312005] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:39.977  Copying: 56/56 [kB] (average 54 MBps) 00:07:39.977 00:07:39.977 05:54:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:39.977 05:54:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:39.977 05:54:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:39.977 05:54:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:39.977 05:54:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:07:39.977 05:54:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:39.977 05:54:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:39.977 05:54:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:39.977 05:54:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:39.978 05:54:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:39.978 05:54:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:39.978 [2024-07-13 05:54:31.587044] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:07:39.978 [2024-07-13 05:54:31.587108] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74604 ] 00:07:39.978 { 00:07:39.978 "subsystems": [ 00:07:39.978 { 00:07:39.978 "subsystem": "bdev", 00:07:39.978 "config": [ 00:07:39.978 { 00:07:39.978 "params": { 00:07:39.978 "trtype": "pcie", 00:07:39.978 "traddr": "0000:00:10.0", 00:07:39.978 "name": "Nvme0" 00:07:39.978 }, 00:07:39.978 "method": "bdev_nvme_attach_controller" 00:07:39.978 }, 00:07:39.978 { 00:07:39.978 "method": "bdev_wait_for_examine" 00:07:39.978 } 00:07:39.978 ] 00:07:39.978 } 00:07:39.978 ] 00:07:39.978 } 00:07:40.236 [2024-07-13 05:54:31.719986] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.236 [2024-07-13 05:54:31.757948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.236 [2024-07-13 05:54:31.790870] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:40.494  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:40.494 00:07:40.495 05:54:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:40.495 05:54:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:40.495 05:54:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:07:40.495 05:54:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:07:40.495 05:54:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:07:40.495 05:54:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:40.495 05:54:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:40.495 05:54:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:41.062 05:54:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:07:41.062 05:54:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:41.062 05:54:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:41.062 05:54:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:41.062 [2024-07-13 05:54:32.534731] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:07:41.062 [2024-07-13 05:54:32.534826] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74613 ] 00:07:41.062 { 00:07:41.062 "subsystems": [ 00:07:41.062 { 00:07:41.062 "subsystem": "bdev", 00:07:41.062 "config": [ 00:07:41.062 { 00:07:41.062 "params": { 00:07:41.062 "trtype": "pcie", 00:07:41.062 "traddr": "0000:00:10.0", 00:07:41.062 "name": "Nvme0" 00:07:41.062 }, 00:07:41.062 "method": "bdev_nvme_attach_controller" 00:07:41.062 }, 00:07:41.062 { 00:07:41.062 "method": "bdev_wait_for_examine" 00:07:41.062 } 00:07:41.062 ] 00:07:41.062 } 00:07:41.062 ] 00:07:41.062 } 00:07:41.062 [2024-07-13 05:54:32.671794] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.062 [2024-07-13 05:54:32.706756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.062 [2024-07-13 05:54:32.737851] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:41.320  Copying: 48/48 [kB] (average 46 MBps) 00:07:41.320 00:07:41.320 05:54:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:07:41.320 05:54:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:41.320 05:54:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:41.320 05:54:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:41.320 [2024-07-13 05:54:33.027674] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:07:41.320 [2024-07-13 05:54:33.027770] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74632 ] 00:07:41.320 { 00:07:41.320 "subsystems": [ 00:07:41.320 { 00:07:41.320 "subsystem": "bdev", 00:07:41.320 "config": [ 00:07:41.320 { 00:07:41.320 "params": { 00:07:41.320 "trtype": "pcie", 00:07:41.320 "traddr": "0000:00:10.0", 00:07:41.320 "name": "Nvme0" 00:07:41.320 }, 00:07:41.320 "method": "bdev_nvme_attach_controller" 00:07:41.320 }, 00:07:41.320 { 00:07:41.320 "method": "bdev_wait_for_examine" 00:07:41.320 } 00:07:41.320 ] 00:07:41.320 } 00:07:41.320 ] 00:07:41.320 } 00:07:41.578 [2024-07-13 05:54:33.160315] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.578 [2024-07-13 05:54:33.195865] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.578 [2024-07-13 05:54:33.224299] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:41.838  Copying: 48/48 [kB] (average 23 MBps) 00:07:41.838 00:07:41.838 05:54:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:41.838 05:54:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:41.838 05:54:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:41.838 05:54:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:41.838 05:54:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:07:41.838 05:54:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:41.838 05:54:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:41.838 05:54:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:41.838 05:54:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:41.838 05:54:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:41.838 05:54:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:41.838 [2024-07-13 05:54:33.491173] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:07:41.838 [2024-07-13 05:54:33.491259] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74642 ] 00:07:41.838 { 00:07:41.838 "subsystems": [ 00:07:41.838 { 00:07:41.838 "subsystem": "bdev", 00:07:41.838 "config": [ 00:07:41.838 { 00:07:41.838 "params": { 00:07:41.838 "trtype": "pcie", 00:07:41.838 "traddr": "0000:00:10.0", 00:07:41.838 "name": "Nvme0" 00:07:41.838 }, 00:07:41.838 "method": "bdev_nvme_attach_controller" 00:07:41.838 }, 00:07:41.838 { 00:07:41.838 "method": "bdev_wait_for_examine" 00:07:41.838 } 00:07:41.838 ] 00:07:41.838 } 00:07:41.838 ] 00:07:41.838 } 00:07:42.097 [2024-07-13 05:54:33.622250] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.097 [2024-07-13 05:54:33.662286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.097 [2024-07-13 05:54:33.693182] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:42.355  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:42.355 00:07:42.355 05:54:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:42.355 05:54:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:07:42.355 05:54:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:07:42.355 05:54:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:07:42.355 05:54:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:42.355 05:54:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:42.355 05:54:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:42.922 05:54:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:07:42.922 05:54:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:42.922 05:54:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:42.922 05:54:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:42.922 { 00:07:42.922 "subsystems": [ 00:07:42.922 { 00:07:42.922 "subsystem": "bdev", 00:07:42.922 "config": [ 00:07:42.922 { 00:07:42.922 "params": { 00:07:42.922 "trtype": "pcie", 00:07:42.922 "traddr": "0000:00:10.0", 00:07:42.922 "name": "Nvme0" 00:07:42.922 }, 00:07:42.922 "method": "bdev_nvme_attach_controller" 00:07:42.922 }, 00:07:42.922 { 00:07:42.922 "method": "bdev_wait_for_examine" 00:07:42.922 } 00:07:42.922 ] 00:07:42.922 } 00:07:42.922 ] 00:07:42.922 } 00:07:42.922 [2024-07-13 05:54:34.429556] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:07:42.922 [2024-07-13 05:54:34.429670] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74661 ] 00:07:42.922 [2024-07-13 05:54:34.563609] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.922 [2024-07-13 05:54:34.598891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.922 [2024-07-13 05:54:34.627646] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:43.181  Copying: 48/48 [kB] (average 46 MBps) 00:07:43.181 00:07:43.181 05:54:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:43.181 05:54:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:07:43.181 05:54:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:43.181 05:54:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:43.181 [2024-07-13 05:54:34.905698] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:43.181 [2024-07-13 05:54:34.906472] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74680 ] 00:07:43.440 { 00:07:43.440 "subsystems": [ 00:07:43.440 { 00:07:43.440 "subsystem": "bdev", 00:07:43.440 "config": [ 00:07:43.440 { 00:07:43.440 "params": { 00:07:43.440 "trtype": "pcie", 00:07:43.440 "traddr": "0000:00:10.0", 00:07:43.440 "name": "Nvme0" 00:07:43.440 }, 00:07:43.440 "method": "bdev_nvme_attach_controller" 00:07:43.440 }, 00:07:43.440 { 00:07:43.440 "method": "bdev_wait_for_examine" 00:07:43.440 } 00:07:43.440 ] 00:07:43.440 } 00:07:43.440 ] 00:07:43.440 } 00:07:43.440 [2024-07-13 05:54:35.038444] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.440 [2024-07-13 05:54:35.080238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.440 [2024-07-13 05:54:35.111321] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:43.700  Copying: 48/48 [kB] (average 46 MBps) 00:07:43.700 00:07:43.700 05:54:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:43.700 05:54:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:43.700 05:54:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:43.700 05:54:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:43.700 05:54:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:07:43.700 05:54:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:43.700 05:54:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:43.700 05:54:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:43.700 05:54:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- 
dd/common.sh@18 -- # gen_conf 00:07:43.700 05:54:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:43.700 05:54:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:43.700 [2024-07-13 05:54:35.390002] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:43.700 [2024-07-13 05:54:35.390085] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74690 ] 00:07:43.700 { 00:07:43.700 "subsystems": [ 00:07:43.700 { 00:07:43.700 "subsystem": "bdev", 00:07:43.700 "config": [ 00:07:43.700 { 00:07:43.700 "params": { 00:07:43.700 "trtype": "pcie", 00:07:43.700 "traddr": "0000:00:10.0", 00:07:43.700 "name": "Nvme0" 00:07:43.700 }, 00:07:43.700 "method": "bdev_nvme_attach_controller" 00:07:43.700 }, 00:07:43.700 { 00:07:43.700 "method": "bdev_wait_for_examine" 00:07:43.700 } 00:07:43.700 ] 00:07:43.700 } 00:07:43.700 ] 00:07:43.700 } 00:07:43.958 [2024-07-13 05:54:35.524279] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.958 [2024-07-13 05:54:35.558321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.958 [2024-07-13 05:54:35.590510] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:44.217  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:44.217 00:07:44.217 ************************************ 00:07:44.217 END TEST dd_rw 00:07:44.217 ************************************ 00:07:44.217 00:07:44.217 real 0m11.900s 00:07:44.217 user 0m8.862s 00:07:44.217 sys 0m3.720s 00:07:44.217 05:54:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:44.217 05:54:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:44.217 05:54:35 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:07:44.217 05:54:35 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:07:44.217 05:54:35 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:44.217 05:54:35 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.217 05:54:35 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:44.217 ************************************ 00:07:44.217 START TEST dd_rw_offset 00:07:44.217 ************************************ 00:07:44.217 05:54:35 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1123 -- # basic_offset 00:07:44.217 05:54:35 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:07:44.217 05:54:35 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:07:44.217 05:54:35 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:07:44.217 05:54:35 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:44.217 05:54:35 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:07:44.217 05:54:35 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=1eq8h26x6m06hmc1vb024udnlynqll1fjnycca4obkugyyjyutnin9c2gm75ubxt6f6wbhn1uifjmueeief6ev3l1jbtpebjbx5ewcj39ahevmukffmeid9b6ldirplkmxjn02m07q7m7enfut70z6qwvayes7zgj8n5watprw8zgt5br0yob18u7z0ie034vdykh858fv2156irnj9gseu8tnmcpesya2r74gms1cqo3oadbp6d779vxi3z17pwip8yy3fg8bpnrs8y84uzb4yqo9fk9966kw0yso40b2esn5ry3tc572yifi2a1bcwiapne4f5tmzj4x35l1k9q4osq42v22uf9opc7eytjlgo6hh834mf97z4kn29dtq8s9pe91irkxr9xiivzqamtbi2tja5ydljxvmy6x572gqhnw1e61lfddn1n2mg9or7glvolufzuq0j05uyau7e2shsz637oq6rzi3vwkxgvul2v4yuvj8e2om0t16rvjfk9gig9fjrt38dzw6hwq4d2nbaiwz7fdv4m60cjfcpiw6uhgumjpjoehtgor2x6e7gd74spkohi0vk7z8frzdwv77g90hiannn5fv06rdy5nsyxsvrz7l0pt9m3xhba1goy3egvr3b0294x2i47aara0ou7mb39on12drh9cpwn5elf4xf2kuv3eskgm1906cpewemgyjgxvtdeiz9bj4duc9zu872tf1d8c49rlymxxjy78s65yjdlbcq46gfd4acwma2ncwuthyh800res8oyqoou1czpvotzv93w1q9hmmkp7lfrib2zvy76mkyapbkosytjpzzrcqbtahc1s29azuv5rs3ddc623c7fh5fog525o7z3rlau4m8ebij3n0l6fm704yb7lzy3hskoppitsd52whbdlcvo5m6b36jp6ibucvoukbfcuv6k9q8npr4cdwqgn852c5johzg8y4iidjaaghouma4nsgk9j8lv905xg94qa8av3kclxbertjnp6ayi2jfyuzyjg7io2iijhqoenzejuqb74n9c2rqjvkzmffbz5iv25ezldaxqiw2vbo7tdxu3yb42zpbz4zhq7i8b0n1mh0z10z5tjwqjpnp2lp7wphroihi3htohan285g48khxapcsxy2u1dnzekrnh8j9od52626geeid3na5shsppm518of18l9mz181nz5yiy61jvzwa6hg4zgvmi1hcf6ziiu034yxjqofqfjbp4a9qkt3lel7nkl2caws64iw8u3q0uehjw74rx5vzp8r5ydunbaay1g9a12rj7sjq4yi5md8i8tc70512kni0f1yno6rcz02dlpyo8l0y819sogsdlo6db41ufsqyny8vapk80fpsrx0xdgaczcvbe6kzfk1l9agdt11ib0n4ogwqb802ywsraxz2co2fmm5oks6hagzg97hzi05syfuxu3nrm3zd5lc9tp9oh24hj48f7dmdkjq1db9usdb54linzswym82sc5lghhvuv87aibjf5lw3o6m5fv39fxn06j519fkvmsa03kr7dfthwzveab1pbqmzovc5a1h0emch6nxlda28i2hdc06fhhbzdfshultivvlectntn3cqx46fh3a66j2xekkrh25ok2440nxi9u073gg4i1p723s3gtsdjdyvj4iazy2tm2jndwtc6u4exb6juiinvy6rbuat5xbm3fatilzgmtoct0ijfx8how0j4h8unj3c7avhwb8qp0fgcmed2qnaro5dyg7qv3rwrekl26mtghvxhdxeg1gm6tcsvq4qf36pc6r1s8ja5y7u51p1bbz4v9mnt310v18txiihi928htc8dn61c77xh4x7uxectxfhv9hn3jpycyou8bchxmylj7zonv7bhzh34obyvjecuys7b02c6n5xssmw4y6i22bd3niswbqy3xba1bht2vqkdplngjt5lqa82rmkiygo9l3926018dhonnxgfkjjw4hpeftfeztg1pm57qjh62i1acuqkebt1o9guiw8dpreqw1jovagx33bbxvqli9ht9vxs7xgptcbjrj314wanvmqbern0o3h5gj9vr3l7v49kldcpxke8arwjzodq2w8lu4ilo0v0e605rz3b2tezxff8tl92bb923vy2ke0r6aw05m0gjf56f22ptflq1nux3z5qk7admkghqfavxjaw7yfdi8hx1byj70ow6vhabblueqvxb8jmnaphm7341pc0ex1eijs9jqq6rb7zp8456uo9khu2imua9khbj7o5rdnthlwd4n2qo03xwsdx5oasus2fow39fnimg69iayh447s1fzf2e61k7z9onu87iah2p7p0pivsdebiyni5rfd0r1ilxztaxskk9spkf42iv6tydiiu4ke5eruh60s9jy32efvo151h3tzy83nrp9lkw5h0ytk39dp3dxza2grpdqfg3xonp1jr0lztaku6omngv0ild5w1q9dvlqi0mizba0s7bh79rk4tgmrl0oeqbx7p3g1qp0utngi0vhpk0bsdog3wm6sclgvxmkuqs4lbtn7ucszn3ojh8ulfwz6z42m47k13kxcl3kryrxfe8ncu3z9nwyc49uwpob61ab4fgjzyw0000fq7dxcognwngyqumxmhvq59q5cx568w6jxa69qch77dqh4b21v711hpmss3ywuacrsz6iy86xabzn867923upl2afftwotn724aqbqxw2lzu60e3f2oq9xdzrd58byq6makne3umryxzqgoiqaimgvv0w5lz5k3fjxq23b7q16nbnwwl9f382omk3amw4ha476id4lhmw6m0vzmcu3k92748welb8h65dv0vd3ufxn95bqlv3kiur0h1ddbhdh83ksr4w9hnexedy6ssoz2a4qsbaauxbz7kh064j82ulpvmqon296hwvyr3wnf1n73uof68yae8s35aptaqd5yq4zk4pkr6z4rfyfs3i9ds2k6ffqk0oevgze2ct2iygrqwhpte9kt3s6ulijy78ndw6cp261fqs6fo749cissi6f7pbgjz8yfdgqk4iyfy3ybbbvfm79agwgmtgag3zkzgu5j659i0jav4lr5vpdeduxkp613343qt0qi3ue4ed22g037vn9i9d47qvyh5qgr7j3abux0ry74l3n1jvaenc4bhzcaq0tch1kxi5kmi6w81v52zjvxphf90kxtkoblulekbgbfpfva52x5g0uvxpkk2q0votpx2hdirdd1jnirq0noq59ji4nn610o7wggny4rudd82sep4gtd4g9kn0bfcoysdumrlurlch2lyvcwr62chwa69whrbydwd60y2d1yvlz64bb615us2fuykac3tk7drg5c388ve5xr8bdp2ufp5uit1uw6tdnz0xxpo1ufyumgikjdvs0j8fokvphkgka159wb59e2fpmv5w3snjaww2abz27umf65eteinesj8x5uoztx0yaoi70nrf5j1vz4h5eh
pvt8sxqdgf5foqrizchklzc71rpxof2b9xpnukl1pi289bdafmsgc6eyaq7lt62edokkgrzwgwd6eppxocte4ishbcz8xsb5uow1rj8y6r9d1xqby3kalkil6i1yfgswf1ynja6gelsrbb5bdye2g9soyohkr3i577iceat671arj9vluw616hllpmkevh2ox9pwkv5zk55qni7e9nm24dgh9bvv8236wroc0pjocwc6esrwardbfgvdtdooiswxai3shmf3oq4934k8rb00mo9ymnqvjnpgk5d8x7bikxikw9v19wrsf2ybco4tj0jamvpji9adq1f0nk11ulh4crrfzqyn4m0mdctcvfwdnphy8o0g0xy9lfz8v1ws1vpg6tj5sby86aszjxbvneecsim2dmpcmeekcu272jhdjv8i2xkf25uklvs2zz9tcegawz88pje9gcp9tzgw45700zde95lrd7fd8isgifotxymsf3a2toir700dy8v1qa39mzggxqoyfozhupsytd0kla4bukjxyyzvec 00:07:44.217 05:54:35 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:07:44.217 05:54:35 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:07:44.217 05:54:35 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:07:44.217 05:54:35 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:44.476 [2024-07-13 05:54:35.970019] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:44.476 [2024-07-13 05:54:35.970108] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74726 ] 00:07:44.476 { 00:07:44.476 "subsystems": [ 00:07:44.476 { 00:07:44.476 "subsystem": "bdev", 00:07:44.476 "config": [ 00:07:44.476 { 00:07:44.476 "params": { 00:07:44.476 "trtype": "pcie", 00:07:44.476 "traddr": "0000:00:10.0", 00:07:44.476 "name": "Nvme0" 00:07:44.476 }, 00:07:44.476 "method": "bdev_nvme_attach_controller" 00:07:44.476 }, 00:07:44.476 { 00:07:44.476 "method": "bdev_wait_for_examine" 00:07:44.476 } 00:07:44.476 ] 00:07:44.476 } 00:07:44.476 ] 00:07:44.476 } 00:07:44.476 [2024-07-13 05:54:36.102892] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.476 [2024-07-13 05:54:36.136863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.476 [2024-07-13 05:54:36.165798] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:44.736  Copying: 4096/4096 [B] (average 4000 kBps) 00:07:44.736 00:07:44.736 05:54:36 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:07:44.736 05:54:36 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:07:44.736 05:54:36 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:07:44.736 05:54:36 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:44.995 [2024-07-13 05:54:36.464584] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:07:44.995 [2024-07-13 05:54:36.464681] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74734 ] 00:07:44.995 { 00:07:44.995 "subsystems": [ 00:07:44.995 { 00:07:44.995 "subsystem": "bdev", 00:07:44.995 "config": [ 00:07:44.995 { 00:07:44.995 "params": { 00:07:44.995 "trtype": "pcie", 00:07:44.995 "traddr": "0000:00:10.0", 00:07:44.995 "name": "Nvme0" 00:07:44.995 }, 00:07:44.995 "method": "bdev_nvme_attach_controller" 00:07:44.995 }, 00:07:44.995 { 00:07:44.995 "method": "bdev_wait_for_examine" 00:07:44.995 } 00:07:44.995 ] 00:07:44.995 } 00:07:44.995 ] 00:07:44.995 } 00:07:44.995 [2024-07-13 05:54:36.605363] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.995 [2024-07-13 05:54:36.646796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.995 [2024-07-13 05:54:36.680924] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:45.255  Copying: 4096/4096 [B] (average 4000 kBps) 00:07:45.255 00:07:45.255 05:54:36 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:07:45.255 ************************************ 00:07:45.255 END TEST dd_rw_offset 00:07:45.255 ************************************ 00:07:45.256 05:54:36 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ 1eq8h26x6m06hmc1vb024udnlynqll1fjnycca4obkugyyjyutnin9c2gm75ubxt6f6wbhn1uifjmueeief6ev3l1jbtpebjbx5ewcj39ahevmukffmeid9b6ldirplkmxjn02m07q7m7enfut70z6qwvayes7zgj8n5watprw8zgt5br0yob18u7z0ie034vdykh858fv2156irnj9gseu8tnmcpesya2r74gms1cqo3oadbp6d779vxi3z17pwip8yy3fg8bpnrs8y84uzb4yqo9fk9966kw0yso40b2esn5ry3tc572yifi2a1bcwiapne4f5tmzj4x35l1k9q4osq42v22uf9opc7eytjlgo6hh834mf97z4kn29dtq8s9pe91irkxr9xiivzqamtbi2tja5ydljxvmy6x572gqhnw1e61lfddn1n2mg9or7glvolufzuq0j05uyau7e2shsz637oq6rzi3vwkxgvul2v4yuvj8e2om0t16rvjfk9gig9fjrt38dzw6hwq4d2nbaiwz7fdv4m60cjfcpiw6uhgumjpjoehtgor2x6e7gd74spkohi0vk7z8frzdwv77g90hiannn5fv06rdy5nsyxsvrz7l0pt9m3xhba1goy3egvr3b0294x2i47aara0ou7mb39on12drh9cpwn5elf4xf2kuv3eskgm1906cpewemgyjgxvtdeiz9bj4duc9zu872tf1d8c49rlymxxjy78s65yjdlbcq46gfd4acwma2ncwuthyh800res8oyqoou1czpvotzv93w1q9hmmkp7lfrib2zvy76mkyapbkosytjpzzrcqbtahc1s29azuv5rs3ddc623c7fh5fog525o7z3rlau4m8ebij3n0l6fm704yb7lzy3hskoppitsd52whbdlcvo5m6b36jp6ibucvoukbfcuv6k9q8npr4cdwqgn852c5johzg8y4iidjaaghouma4nsgk9j8lv905xg94qa8av3kclxbertjnp6ayi2jfyuzyjg7io2iijhqoenzejuqb74n9c2rqjvkzmffbz5iv25ezldaxqiw2vbo7tdxu3yb42zpbz4zhq7i8b0n1mh0z10z5tjwqjpnp2lp7wphroihi3htohan285g48khxapcsxy2u1dnzekrnh8j9od52626geeid3na5shsppm518of18l9mz181nz5yiy61jvzwa6hg4zgvmi1hcf6ziiu034yxjqofqfjbp4a9qkt3lel7nkl2caws64iw8u3q0uehjw74rx5vzp8r5ydunbaay1g9a12rj7sjq4yi5md8i8tc70512kni0f1yno6rcz02dlpyo8l0y819sogsdlo6db41ufsqyny8vapk80fpsrx0xdgaczcvbe6kzfk1l9agdt11ib0n4ogwqb802ywsraxz2co2fmm5oks6hagzg97hzi05syfuxu3nrm3zd5lc9tp9oh24hj48f7dmdkjq1db9usdb54linzswym82sc5lghhvuv87aibjf5lw3o6m5fv39fxn06j519fkvmsa03kr7dfthwzveab1pbqmzovc5a1h0emch6nxlda28i2hdc06fhhbzdfshultivvlectntn3cqx46fh3a66j2xekkrh25ok2440nxi9u073gg4i1p723s3gtsdjdyvj4iazy2tm2jndwtc6u4exb6juiinvy6rbuat5xbm3fatilzgmtoct0ijfx8how0j4h8unj3c7avhwb8qp0fgcmed2qnaro5dyg7qv3rwrekl26mtghvxhdxeg1gm6tcsvq4qf36pc6r1s8ja5y7u51p1bbz4v9mnt310v18txiihi928htc8dn61c77xh4x7uxectxfhv9hn3jpycyou8bchxmylj7zonv7bhzh34obyvjecuys7b02c6n5xssmw4y6i22bd3niswbqy3xba1bht2vqkdplngjt5lqa82rmkiygo9l3926018dhonnxgfkjjw4hpe
ftfeztg1pm57qjh62i1acuqkebt1o9guiw8dpreqw1jovagx33bbxvqli9ht9vxs7xgptcbjrj314wanvmqbern0o3h5gj9vr3l7v49kldcpxke8arwjzodq2w8lu4ilo0v0e605rz3b2tezxff8tl92bb923vy2ke0r6aw05m0gjf56f22ptflq1nux3z5qk7admkghqfavxjaw7yfdi8hx1byj70ow6vhabblueqvxb8jmnaphm7341pc0ex1eijs9jqq6rb7zp8456uo9khu2imua9khbj7o5rdnthlwd4n2qo03xwsdx5oasus2fow39fnimg69iayh447s1fzf2e61k7z9onu87iah2p7p0pivsdebiyni5rfd0r1ilxztaxskk9spkf42iv6tydiiu4ke5eruh60s9jy32efvo151h3tzy83nrp9lkw5h0ytk39dp3dxza2grpdqfg3xonp1jr0lztaku6omngv0ild5w1q9dvlqi0mizba0s7bh79rk4tgmrl0oeqbx7p3g1qp0utngi0vhpk0bsdog3wm6sclgvxmkuqs4lbtn7ucszn3ojh8ulfwz6z42m47k13kxcl3kryrxfe8ncu3z9nwyc49uwpob61ab4fgjzyw0000fq7dxcognwngyqumxmhvq59q5cx568w6jxa69qch77dqh4b21v711hpmss3ywuacrsz6iy86xabzn867923upl2afftwotn724aqbqxw2lzu60e3f2oq9xdzrd58byq6makne3umryxzqgoiqaimgvv0w5lz5k3fjxq23b7q16nbnwwl9f382omk3amw4ha476id4lhmw6m0vzmcu3k92748welb8h65dv0vd3ufxn95bqlv3kiur0h1ddbhdh83ksr4w9hnexedy6ssoz2a4qsbaauxbz7kh064j82ulpvmqon296hwvyr3wnf1n73uof68yae8s35aptaqd5yq4zk4pkr6z4rfyfs3i9ds2k6ffqk0oevgze2ct2iygrqwhpte9kt3s6ulijy78ndw6cp261fqs6fo749cissi6f7pbgjz8yfdgqk4iyfy3ybbbvfm79agwgmtgag3zkzgu5j659i0jav4lr5vpdeduxkp613343qt0qi3ue4ed22g037vn9i9d47qvyh5qgr7j3abux0ry74l3n1jvaenc4bhzcaq0tch1kxi5kmi6w81v52zjvxphf90kxtkoblulekbgbfpfva52x5g0uvxpkk2q0votpx2hdirdd1jnirq0noq59ji4nn610o7wggny4rudd82sep4gtd4g9kn0bfcoysdumrlurlch2lyvcwr62chwa69whrbydwd60y2d1yvlz64bb615us2fuykac3tk7drg5c388ve5xr8bdp2ufp5uit1uw6tdnz0xxpo1ufyumgikjdvs0j8fokvphkgka159wb59e2fpmv5w3snjaww2abz27umf65eteinesj8x5uoztx0yaoi70nrf5j1vz4h5ehpvt8sxqdgf5foqrizchklzc71rpxof2b9xpnukl1pi289bdafmsgc6eyaq7lt62edokkgrzwgwd6eppxocte4ishbcz8xsb5uow1rj8y6r9d1xqby3kalkil6i1yfgswf1ynja6gelsrbb5bdye2g9soyohkr3i577iceat671arj9vluw616hllpmkevh2ox9pwkv5zk55qni7e9nm24dgh9bvv8236wroc0pjocwc6esrwardbfgvdtdooiswxai3shmf3oq4934k8rb00mo9ymnqvjnpgk5d8x7bikxikw9v19wrsf2ybco4tj0jamvpji9adq1f0nk11ulh4crrfzqyn4m0mdctcvfwdnphy8o0g0xy9lfz8v1ws1vpg6tj5sby86aszjxbvneecsim2dmpcmeekcu272jhdjv8i2xkf25uklvs2zz9tcegawz88pje9gcp9tzgw45700zde95lrd7fd8isgifotxymsf3a2toir700dy8v1qa39mzggxqoyfozhupsytd0kla4bukjxyyzvec == 
\1\e\q\8\h\2\6\x\6\m\0\6\h\m\c\1\v\b\0\2\4\u\d\n\l\y\n\q\l\l\1\f\j\n\y\c\c\a\4\o\b\k\u\g\y\y\j\y\u\t\n\i\n\9\c\2\g\m\7\5\u\b\x\t\6\f\6\w\b\h\n\1\u\i\f\j\m\u\e\e\i\e\f\6\e\v\3\l\1\j\b\t\p\e\b\j\b\x\5\e\w\c\j\3\9\a\h\e\v\m\u\k\f\f\m\e\i\d\9\b\6\l\d\i\r\p\l\k\m\x\j\n\0\2\m\0\7\q\7\m\7\e\n\f\u\t\7\0\z\6\q\w\v\a\y\e\s\7\z\g\j\8\n\5\w\a\t\p\r\w\8\z\g\t\5\b\r\0\y\o\b\1\8\u\7\z\0\i\e\0\3\4\v\d\y\k\h\8\5\8\f\v\2\1\5\6\i\r\n\j\9\g\s\e\u\8\t\n\m\c\p\e\s\y\a\2\r\7\4\g\m\s\1\c\q\o\3\o\a\d\b\p\6\d\7\7\9\v\x\i\3\z\1\7\p\w\i\p\8\y\y\3\f\g\8\b\p\n\r\s\8\y\8\4\u\z\b\4\y\q\o\9\f\k\9\9\6\6\k\w\0\y\s\o\4\0\b\2\e\s\n\5\r\y\3\t\c\5\7\2\y\i\f\i\2\a\1\b\c\w\i\a\p\n\e\4\f\5\t\m\z\j\4\x\3\5\l\1\k\9\q\4\o\s\q\4\2\v\2\2\u\f\9\o\p\c\7\e\y\t\j\l\g\o\6\h\h\8\3\4\m\f\9\7\z\4\k\n\2\9\d\t\q\8\s\9\p\e\9\1\i\r\k\x\r\9\x\i\i\v\z\q\a\m\t\b\i\2\t\j\a\5\y\d\l\j\x\v\m\y\6\x\5\7\2\g\q\h\n\w\1\e\6\1\l\f\d\d\n\1\n\2\m\g\9\o\r\7\g\l\v\o\l\u\f\z\u\q\0\j\0\5\u\y\a\u\7\e\2\s\h\s\z\6\3\7\o\q\6\r\z\i\3\v\w\k\x\g\v\u\l\2\v\4\y\u\v\j\8\e\2\o\m\0\t\1\6\r\v\j\f\k\9\g\i\g\9\f\j\r\t\3\8\d\z\w\6\h\w\q\4\d\2\n\b\a\i\w\z\7\f\d\v\4\m\6\0\c\j\f\c\p\i\w\6\u\h\g\u\m\j\p\j\o\e\h\t\g\o\r\2\x\6\e\7\g\d\7\4\s\p\k\o\h\i\0\v\k\7\z\8\f\r\z\d\w\v\7\7\g\9\0\h\i\a\n\n\n\5\f\v\0\6\r\d\y\5\n\s\y\x\s\v\r\z\7\l\0\p\t\9\m\3\x\h\b\a\1\g\o\y\3\e\g\v\r\3\b\0\2\9\4\x\2\i\4\7\a\a\r\a\0\o\u\7\m\b\3\9\o\n\1\2\d\r\h\9\c\p\w\n\5\e\l\f\4\x\f\2\k\u\v\3\e\s\k\g\m\1\9\0\6\c\p\e\w\e\m\g\y\j\g\x\v\t\d\e\i\z\9\b\j\4\d\u\c\9\z\u\8\7\2\t\f\1\d\8\c\4\9\r\l\y\m\x\x\j\y\7\8\s\6\5\y\j\d\l\b\c\q\4\6\g\f\d\4\a\c\w\m\a\2\n\c\w\u\t\h\y\h\8\0\0\r\e\s\8\o\y\q\o\o\u\1\c\z\p\v\o\t\z\v\9\3\w\1\q\9\h\m\m\k\p\7\l\f\r\i\b\2\z\v\y\7\6\m\k\y\a\p\b\k\o\s\y\t\j\p\z\z\r\c\q\b\t\a\h\c\1\s\2\9\a\z\u\v\5\r\s\3\d\d\c\6\2\3\c\7\f\h\5\f\o\g\5\2\5\o\7\z\3\r\l\a\u\4\m\8\e\b\i\j\3\n\0\l\6\f\m\7\0\4\y\b\7\l\z\y\3\h\s\k\o\p\p\i\t\s\d\5\2\w\h\b\d\l\c\v\o\5\m\6\b\3\6\j\p\6\i\b\u\c\v\o\u\k\b\f\c\u\v\6\k\9\q\8\n\p\r\4\c\d\w\q\g\n\8\5\2\c\5\j\o\h\z\g\8\y\4\i\i\d\j\a\a\g\h\o\u\m\a\4\n\s\g\k\9\j\8\l\v\9\0\5\x\g\9\4\q\a\8\a\v\3\k\c\l\x\b\e\r\t\j\n\p\6\a\y\i\2\j\f\y\u\z\y\j\g\7\i\o\2\i\i\j\h\q\o\e\n\z\e\j\u\q\b\7\4\n\9\c\2\r\q\j\v\k\z\m\f\f\b\z\5\i\v\2\5\e\z\l\d\a\x\q\i\w\2\v\b\o\7\t\d\x\u\3\y\b\4\2\z\p\b\z\4\z\h\q\7\i\8\b\0\n\1\m\h\0\z\1\0\z\5\t\j\w\q\j\p\n\p\2\l\p\7\w\p\h\r\o\i\h\i\3\h\t\o\h\a\n\2\8\5\g\4\8\k\h\x\a\p\c\s\x\y\2\u\1\d\n\z\e\k\r\n\h\8\j\9\o\d\5\2\6\2\6\g\e\e\i\d\3\n\a\5\s\h\s\p\p\m\5\1\8\o\f\1\8\l\9\m\z\1\8\1\n\z\5\y\i\y\6\1\j\v\z\w\a\6\h\g\4\z\g\v\m\i\1\h\c\f\6\z\i\i\u\0\3\4\y\x\j\q\o\f\q\f\j\b\p\4\a\9\q\k\t\3\l\e\l\7\n\k\l\2\c\a\w\s\6\4\i\w\8\u\3\q\0\u\e\h\j\w\7\4\r\x\5\v\z\p\8\r\5\y\d\u\n\b\a\a\y\1\g\9\a\1\2\r\j\7\s\j\q\4\y\i\5\m\d\8\i\8\t\c\7\0\5\1\2\k\n\i\0\f\1\y\n\o\6\r\c\z\0\2\d\l\p\y\o\8\l\0\y\8\1\9\s\o\g\s\d\l\o\6\d\b\4\1\u\f\s\q\y\n\y\8\v\a\p\k\8\0\f\p\s\r\x\0\x\d\g\a\c\z\c\v\b\e\6\k\z\f\k\1\l\9\a\g\d\t\1\1\i\b\0\n\4\o\g\w\q\b\8\0\2\y\w\s\r\a\x\z\2\c\o\2\f\m\m\5\o\k\s\6\h\a\g\z\g\9\7\h\z\i\0\5\s\y\f\u\x\u\3\n\r\m\3\z\d\5\l\c\9\t\p\9\o\h\2\4\h\j\4\8\f\7\d\m\d\k\j\q\1\d\b\9\u\s\d\b\5\4\l\i\n\z\s\w\y\m\8\2\s\c\5\l\g\h\h\v\u\v\8\7\a\i\b\j\f\5\l\w\3\o\6\m\5\f\v\3\9\f\x\n\0\6\j\5\1\9\f\k\v\m\s\a\0\3\k\r\7\d\f\t\h\w\z\v\e\a\b\1\p\b\q\m\z\o\v\c\5\a\1\h\0\e\m\c\h\6\n\x\l\d\a\2\8\i\2\h\d\c\0\6\f\h\h\b\z\d\f\s\h\u\l\t\i\v\v\l\e\c\t\n\t\n\3\c\q\x\4\6\f\h\3\a\6\6\j\2\x\e\k\k\r\h\2\5\o\k\2\4\4\0\n\x\i\9\u\0\7\3\g\g\4\i\1\p\7\2\3\s\3\g\t\s\d\j\d\y\v\j\4\i\a\z\y\2\t\m\2\j\n\d\w\t\c\6\u\4\e\x\b\6\j\u\i\i\n\v\y\6\r\b\u\a\t\5\x\b\m\3\f\a\t\i\l\z\g\m\t\o\c\t\0\i\j\f\x\8\h\o\w\0\j\4\h\8\u\n\j\3\c\7\a\v\h\w\b\8\q\p\0\f\g\
c\m\e\d\2\q\n\a\r\o\5\d\y\g\7\q\v\3\r\w\r\e\k\l\2\6\m\t\g\h\v\x\h\d\x\e\g\1\g\m\6\t\c\s\v\q\4\q\f\3\6\p\c\6\r\1\s\8\j\a\5\y\7\u\5\1\p\1\b\b\z\4\v\9\m\n\t\3\1\0\v\1\8\t\x\i\i\h\i\9\2\8\h\t\c\8\d\n\6\1\c\7\7\x\h\4\x\7\u\x\e\c\t\x\f\h\v\9\h\n\3\j\p\y\c\y\o\u\8\b\c\h\x\m\y\l\j\7\z\o\n\v\7\b\h\z\h\3\4\o\b\y\v\j\e\c\u\y\s\7\b\0\2\c\6\n\5\x\s\s\m\w\4\y\6\i\2\2\b\d\3\n\i\s\w\b\q\y\3\x\b\a\1\b\h\t\2\v\q\k\d\p\l\n\g\j\t\5\l\q\a\8\2\r\m\k\i\y\g\o\9\l\3\9\2\6\0\1\8\d\h\o\n\n\x\g\f\k\j\j\w\4\h\p\e\f\t\f\e\z\t\g\1\p\m\5\7\q\j\h\6\2\i\1\a\c\u\q\k\e\b\t\1\o\9\g\u\i\w\8\d\p\r\e\q\w\1\j\o\v\a\g\x\3\3\b\b\x\v\q\l\i\9\h\t\9\v\x\s\7\x\g\p\t\c\b\j\r\j\3\1\4\w\a\n\v\m\q\b\e\r\n\0\o\3\h\5\g\j\9\v\r\3\l\7\v\4\9\k\l\d\c\p\x\k\e\8\a\r\w\j\z\o\d\q\2\w\8\l\u\4\i\l\o\0\v\0\e\6\0\5\r\z\3\b\2\t\e\z\x\f\f\8\t\l\9\2\b\b\9\2\3\v\y\2\k\e\0\r\6\a\w\0\5\m\0\g\j\f\5\6\f\2\2\p\t\f\l\q\1\n\u\x\3\z\5\q\k\7\a\d\m\k\g\h\q\f\a\v\x\j\a\w\7\y\f\d\i\8\h\x\1\b\y\j\7\0\o\w\6\v\h\a\b\b\l\u\e\q\v\x\b\8\j\m\n\a\p\h\m\7\3\4\1\p\c\0\e\x\1\e\i\j\s\9\j\q\q\6\r\b\7\z\p\8\4\5\6\u\o\9\k\h\u\2\i\m\u\a\9\k\h\b\j\7\o\5\r\d\n\t\h\l\w\d\4\n\2\q\o\0\3\x\w\s\d\x\5\o\a\s\u\s\2\f\o\w\3\9\f\n\i\m\g\6\9\i\a\y\h\4\4\7\s\1\f\z\f\2\e\6\1\k\7\z\9\o\n\u\8\7\i\a\h\2\p\7\p\0\p\i\v\s\d\e\b\i\y\n\i\5\r\f\d\0\r\1\i\l\x\z\t\a\x\s\k\k\9\s\p\k\f\4\2\i\v\6\t\y\d\i\i\u\4\k\e\5\e\r\u\h\6\0\s\9\j\y\3\2\e\f\v\o\1\5\1\h\3\t\z\y\8\3\n\r\p\9\l\k\w\5\h\0\y\t\k\3\9\d\p\3\d\x\z\a\2\g\r\p\d\q\f\g\3\x\o\n\p\1\j\r\0\l\z\t\a\k\u\6\o\m\n\g\v\0\i\l\d\5\w\1\q\9\d\v\l\q\i\0\m\i\z\b\a\0\s\7\b\h\7\9\r\k\4\t\g\m\r\l\0\o\e\q\b\x\7\p\3\g\1\q\p\0\u\t\n\g\i\0\v\h\p\k\0\b\s\d\o\g\3\w\m\6\s\c\l\g\v\x\m\k\u\q\s\4\l\b\t\n\7\u\c\s\z\n\3\o\j\h\8\u\l\f\w\z\6\z\4\2\m\4\7\k\1\3\k\x\c\l\3\k\r\y\r\x\f\e\8\n\c\u\3\z\9\n\w\y\c\4\9\u\w\p\o\b\6\1\a\b\4\f\g\j\z\y\w\0\0\0\0\f\q\7\d\x\c\o\g\n\w\n\g\y\q\u\m\x\m\h\v\q\5\9\q\5\c\x\5\6\8\w\6\j\x\a\6\9\q\c\h\7\7\d\q\h\4\b\2\1\v\7\1\1\h\p\m\s\s\3\y\w\u\a\c\r\s\z\6\i\y\8\6\x\a\b\z\n\8\6\7\9\2\3\u\p\l\2\a\f\f\t\w\o\t\n\7\2\4\a\q\b\q\x\w\2\l\z\u\6\0\e\3\f\2\o\q\9\x\d\z\r\d\5\8\b\y\q\6\m\a\k\n\e\3\u\m\r\y\x\z\q\g\o\i\q\a\i\m\g\v\v\0\w\5\l\z\5\k\3\f\j\x\q\2\3\b\7\q\1\6\n\b\n\w\w\l\9\f\3\8\2\o\m\k\3\a\m\w\4\h\a\4\7\6\i\d\4\l\h\m\w\6\m\0\v\z\m\c\u\3\k\9\2\7\4\8\w\e\l\b\8\h\6\5\d\v\0\v\d\3\u\f\x\n\9\5\b\q\l\v\3\k\i\u\r\0\h\1\d\d\b\h\d\h\8\3\k\s\r\4\w\9\h\n\e\x\e\d\y\6\s\s\o\z\2\a\4\q\s\b\a\a\u\x\b\z\7\k\h\0\6\4\j\8\2\u\l\p\v\m\q\o\n\2\9\6\h\w\v\y\r\3\w\n\f\1\n\7\3\u\o\f\6\8\y\a\e\8\s\3\5\a\p\t\a\q\d\5\y\q\4\z\k\4\p\k\r\6\z\4\r\f\y\f\s\3\i\9\d\s\2\k\6\f\f\q\k\0\o\e\v\g\z\e\2\c\t\2\i\y\g\r\q\w\h\p\t\e\9\k\t\3\s\6\u\l\i\j\y\7\8\n\d\w\6\c\p\2\6\1\f\q\s\6\f\o\7\4\9\c\i\s\s\i\6\f\7\p\b\g\j\z\8\y\f\d\g\q\k\4\i\y\f\y\3\y\b\b\b\v\f\m\7\9\a\g\w\g\m\t\g\a\g\3\z\k\z\g\u\5\j\6\5\9\i\0\j\a\v\4\l\r\5\v\p\d\e\d\u\x\k\p\6\1\3\3\4\3\q\t\0\q\i\3\u\e\4\e\d\2\2\g\0\3\7\v\n\9\i\9\d\4\7\q\v\y\h\5\q\g\r\7\j\3\a\b\u\x\0\r\y\7\4\l\3\n\1\j\v\a\e\n\c\4\b\h\z\c\a\q\0\t\c\h\1\k\x\i\5\k\m\i\6\w\8\1\v\5\2\z\j\v\x\p\h\f\9\0\k\x\t\k\o\b\l\u\l\e\k\b\g\b\f\p\f\v\a\5\2\x\5\g\0\u\v\x\p\k\k\2\q\0\v\o\t\p\x\2\h\d\i\r\d\d\1\j\n\i\r\q\0\n\o\q\5\9\j\i\4\n\n\6\1\0\o\7\w\g\g\n\y\4\r\u\d\d\8\2\s\e\p\4\g\t\d\4\g\9\k\n\0\b\f\c\o\y\s\d\u\m\r\l\u\r\l\c\h\2\l\y\v\c\w\r\6\2\c\h\w\a\6\9\w\h\r\b\y\d\w\d\6\0\y\2\d\1\y\v\l\z\6\4\b\b\6\1\5\u\s\2\f\u\y\k\a\c\3\t\k\7\d\r\g\5\c\3\8\8\v\e\5\x\r\8\b\d\p\2\u\f\p\5\u\i\t\1\u\w\6\t\d\n\z\0\x\x\p\o\1\u\f\y\u\m\g\i\k\j\d\v\s\0\j\8\f\o\k\v\p\h\k\g\k\a\1\5\9\w\b\5\9\e\2\f\p\m\v\5\w\3\s\n\j\a\w\w\2\a\b\z\2\7\u\m\f\6\5\e\t\e\i\n\e\s\j\8\x\5\u\o\z\t\x\0\y\a\o\i\7\0\n\r\f\5\j\1\v\z\4\h\5\e\h\p\v\t\8\s
\x\q\d\g\f\5\f\o\q\r\i\z\c\h\k\l\z\c\7\1\r\p\x\o\f\2\b\9\x\p\n\u\k\l\1\p\i\2\8\9\b\d\a\f\m\s\g\c\6\e\y\a\q\7\l\t\6\2\e\d\o\k\k\g\r\z\w\g\w\d\6\e\p\p\x\o\c\t\e\4\i\s\h\b\c\z\8\x\s\b\5\u\o\w\1\r\j\8\y\6\r\9\d\1\x\q\b\y\3\k\a\l\k\i\l\6\i\1\y\f\g\s\w\f\1\y\n\j\a\6\g\e\l\s\r\b\b\5\b\d\y\e\2\g\9\s\o\y\o\h\k\r\3\i\5\7\7\i\c\e\a\t\6\7\1\a\r\j\9\v\l\u\w\6\1\6\h\l\l\p\m\k\e\v\h\2\o\x\9\p\w\k\v\5\z\k\5\5\q\n\i\7\e\9\n\m\2\4\d\g\h\9\b\v\v\8\2\3\6\w\r\o\c\0\p\j\o\c\w\c\6\e\s\r\w\a\r\d\b\f\g\v\d\t\d\o\o\i\s\w\x\a\i\3\s\h\m\f\3\o\q\4\9\3\4\k\8\r\b\0\0\m\o\9\y\m\n\q\v\j\n\p\g\k\5\d\8\x\7\b\i\k\x\i\k\w\9\v\1\9\w\r\s\f\2\y\b\c\o\4\t\j\0\j\a\m\v\p\j\i\9\a\d\q\1\f\0\n\k\1\1\u\l\h\4\c\r\r\f\z\q\y\n\4\m\0\m\d\c\t\c\v\f\w\d\n\p\h\y\8\o\0\g\0\x\y\9\l\f\z\8\v\1\w\s\1\v\p\g\6\t\j\5\s\b\y\8\6\a\s\z\j\x\b\v\n\e\e\c\s\i\m\2\d\m\p\c\m\e\e\k\c\u\2\7\2\j\h\d\j\v\8\i\2\x\k\f\2\5\u\k\l\v\s\2\z\z\9\t\c\e\g\a\w\z\8\8\p\j\e\9\g\c\p\9\t\z\g\w\4\5\7\0\0\z\d\e\9\5\l\r\d\7\f\d\8\i\s\g\i\f\o\t\x\y\m\s\f\3\a\2\t\o\i\r\7\0\0\d\y\8\v\1\q\a\3\9\m\z\g\g\x\q\o\y\f\o\z\h\u\p\s\y\t\d\0\k\l\a\4\b\u\k\j\x\y\y\z\v\e\c ]] 00:07:45.256 00:07:45.256 real 0m1.055s 00:07:45.256 user 0m0.746s 00:07:45.256 sys 0m0.420s 00:07:45.256 05:54:36 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:45.256 05:54:36 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:45.256 05:54:36 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:07:45.256 05:54:36 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:07:45.256 05:54:36 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:07:45.256 05:54:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:45.256 05:54:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:45.256 05:54:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:07:45.256 05:54:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:45.256 05:54:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:07:45.256 05:54:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:07:45.256 05:54:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:45.256 05:54:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:45.515 05:54:36 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:45.515 [2024-07-13 05:54:37.026650] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:07:45.515 [2024-07-13 05:54:37.026741] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74769 ] 00:07:45.515 { 00:07:45.515 "subsystems": [ 00:07:45.515 { 00:07:45.515 "subsystem": "bdev", 00:07:45.515 "config": [ 00:07:45.515 { 00:07:45.515 "params": { 00:07:45.515 "trtype": "pcie", 00:07:45.515 "traddr": "0000:00:10.0", 00:07:45.515 "name": "Nvme0" 00:07:45.515 }, 00:07:45.515 "method": "bdev_nvme_attach_controller" 00:07:45.515 }, 00:07:45.515 { 00:07:45.515 "method": "bdev_wait_for_examine" 00:07:45.515 } 00:07:45.515 ] 00:07:45.515 } 00:07:45.515 ] 00:07:45.515 } 00:07:45.515 [2024-07-13 05:54:37.160198] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.515 [2024-07-13 05:54:37.201700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.515 [2024-07-13 05:54:37.236651] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:45.774  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:45.774 00:07:45.774 05:54:37 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:45.774 ************************************ 00:07:45.774 END TEST spdk_dd_basic_rw 00:07:45.774 ************************************ 00:07:45.774 00:07:45.774 real 0m14.408s 00:07:45.774 user 0m10.441s 00:07:45.774 sys 0m4.653s 00:07:45.774 05:54:37 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:45.774 05:54:37 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:46.033 05:54:37 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:46.033 05:54:37 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:46.033 05:54:37 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:46.033 05:54:37 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.033 05:54:37 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:46.033 ************************************ 00:07:46.033 START TEST spdk_dd_posix 00:07:46.033 ************************************ 00:07:46.033 05:54:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:46.033 * Looking for test storage... 
00:07:46.033 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:46.033 05:54:37 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:46.033 05:54:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:46.033 05:54:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:46.033 05:54:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:46.033 05:54:37 spdk_dd.spdk_dd_posix -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.033 05:54:37 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.033 05:54:37 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.033 05:54:37 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:07:46.033 05:54:37 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.033 05:54:37 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:07:46.033 05:54:37 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:07:46.033 05:54:37 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:07:46.033 05:54:37 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:07:46.033 05:54:37 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:46.033 05:54:37 spdk_dd.spdk_dd_posix 
-- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:46.033 05:54:37 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:07:46.033 05:54:37 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:07:46.033 * First test run, liburing in use 00:07:46.033 05:54:37 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:07:46.033 05:54:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:46.033 05:54:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.033 05:54:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:46.033 ************************************ 00:07:46.033 START TEST dd_flag_append 00:07:46.033 ************************************ 00:07:46.033 05:54:37 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1123 -- # append 00:07:46.033 05:54:37 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:07:46.033 05:54:37 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:07:46.033 05:54:37 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:07:46.033 05:54:37 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:07:46.033 05:54:37 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:46.033 05:54:37 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=4pny3i15xy6vlkgb7pmwceou1xoa2cr6 00:07:46.033 05:54:37 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:07:46.033 05:54:37 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:07:46.033 05:54:37 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:46.033 05:54:37 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=zi3kitvfos5oav9l9avnkioftjypo67u 00:07:46.033 05:54:37 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s 4pny3i15xy6vlkgb7pmwceou1xoa2cr6 00:07:46.033 05:54:37 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s zi3kitvfos5oav9l9avnkioftjypo67u 00:07:46.033 05:54:37 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:46.033 [2024-07-13 05:54:37.668342] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:07:46.033 [2024-07-13 05:54:37.668462] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74828 ] 00:07:46.292 [2024-07-13 05:54:37.807327] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.292 [2024-07-13 05:54:37.847912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.292 [2024-07-13 05:54:37.881986] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:46.292  Copying: 32/32 [B] (average 31 kBps) 00:07:46.292 00:07:46.551 05:54:38 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ zi3kitvfos5oav9l9avnkioftjypo67u4pny3i15xy6vlkgb7pmwceou1xoa2cr6 == \z\i\3\k\i\t\v\f\o\s\5\o\a\v\9\l\9\a\v\n\k\i\o\f\t\j\y\p\o\6\7\u\4\p\n\y\3\i\1\5\x\y\6\v\l\k\g\b\7\p\m\w\c\e\o\u\1\x\o\a\2\c\r\6 ]] 00:07:46.551 00:07:46.551 real 0m0.409s 00:07:46.551 user 0m0.199s 00:07:46.551 sys 0m0.171s 00:07:46.551 05:54:38 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:46.551 05:54:38 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:46.551 ************************************ 00:07:46.551 END TEST dd_flag_append 00:07:46.551 ************************************ 00:07:46.551 05:54:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:46.551 05:54:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:07:46.551 05:54:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:46.551 05:54:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.551 05:54:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:46.551 ************************************ 00:07:46.551 START TEST dd_flag_directory 00:07:46.551 ************************************ 00:07:46.551 05:54:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1123 -- # directory 00:07:46.551 05:54:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:46.551 05:54:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:07:46.551 05:54:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:46.551 05:54:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.551 05:54:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:46.551 05:54:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.551 05:54:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:46.551 05:54:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
00:07:46.551 05:54:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:46.551 05:54:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.551 05:54:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:46.551 05:54:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:46.551 [2024-07-13 05:54:38.132012] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:46.551 [2024-07-13 05:54:38.132104] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74856 ] 00:07:46.551 [2024-07-13 05:54:38.270374] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.810 [2024-07-13 05:54:38.304997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.810 [2024-07-13 05:54:38.332914] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:46.810 [2024-07-13 05:54:38.347192] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:46.810 [2024-07-13 05:54:38.347244] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:46.810 [2024-07-13 05:54:38.347273] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:46.810 [2024-07-13 05:54:38.415208] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:46.810 05:54:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:07:46.810 05:54:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:46.810 05:54:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:07:46.810 05:54:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:07:46.810 05:54:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:07:46.810 05:54:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:46.810 05:54:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:46.810 05:54:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:07:46.810 05:54:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:46.810 05:54:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.810 05:54:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- 
# case "$(type -t "$arg")" in 00:07:46.810 05:54:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.810 05:54:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:46.810 05:54:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.810 05:54:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:46.810 05:54:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.810 05:54:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:46.810 05:54:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:46.810 [2024-07-13 05:54:38.535568] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:46.810 [2024-07-13 05:54:38.535660] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74860 ] 00:07:47.069 [2024-07-13 05:54:38.671300] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.069 [2024-07-13 05:54:38.709259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.069 [2024-07-13 05:54:38.737524] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:47.069 [2024-07-13 05:54:38.751836] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:47.069 [2024-07-13 05:54:38.751902] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:47.069 [2024-07-13 05:54:38.751932] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:47.328 [2024-07-13 05:54:38.810520] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:47.328 ************************************ 00:07:47.328 END TEST dd_flag_directory 00:07:47.328 ************************************ 00:07:47.328 05:54:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:07:47.328 05:54:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:47.328 05:54:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:07:47.328 05:54:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:07:47.328 05:54:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:07:47.328 05:54:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:47.328 00:07:47.328 real 0m0.806s 00:07:47.328 user 0m0.408s 00:07:47.328 sys 0m0.190s 00:07:47.328 05:54:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:47.328 05:54:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@10 -- # set +x 00:07:47.328 05:54:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:47.328 05:54:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:07:47.328 05:54:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:47.328 05:54:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.328 05:54:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:47.328 ************************************ 00:07:47.328 START TEST dd_flag_nofollow 00:07:47.328 ************************************ 00:07:47.328 05:54:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1123 -- # nofollow 00:07:47.328 05:54:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:47.328 05:54:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:47.328 05:54:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:47.328 05:54:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:47.328 05:54:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:47.328 05:54:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:07:47.328 05:54:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:47.328 05:54:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.328 05:54:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:47.328 05:54:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.328 05:54:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:47.328 05:54:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.328 05:54:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:47.328 05:54:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.328 05:54:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:47.328 05:54:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:47.328 
[2024-07-13 05:54:38.992791] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:47.328 [2024-07-13 05:54:38.992896] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74894 ] 00:07:47.627 [2024-07-13 05:54:39.129056] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.627 [2024-07-13 05:54:39.163695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.627 [2024-07-13 05:54:39.194649] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:47.627 [2024-07-13 05:54:39.209734] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:47.627 [2024-07-13 05:54:39.209818] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:47.627 [2024-07-13 05:54:39.209832] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:47.627 [2024-07-13 05:54:39.267683] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:47.627 05:54:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:07:47.627 05:54:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:47.627 05:54:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:07:47.627 05:54:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:07:47.627 05:54:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:07:47.627 05:54:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:47.627 05:54:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:47.627 05:54:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:07:47.627 05:54:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:47.627 05:54:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.627 05:54:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:47.627 05:54:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.627 05:54:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:47.627 05:54:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.913 05:54:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:47.913 05:54:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- 
common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.913 05:54:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:47.913 05:54:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:47.913 [2024-07-13 05:54:39.389819] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:47.914 [2024-07-13 05:54:39.389924] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74898 ] 00:07:47.914 [2024-07-13 05:54:39.526600] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.914 [2024-07-13 05:54:39.561058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.914 [2024-07-13 05:54:39.589630] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:47.914 [2024-07-13 05:54:39.604031] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:47.914 [2024-07-13 05:54:39.604099] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:47.914 [2024-07-13 05:54:39.604113] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:48.172 [2024-07-13 05:54:39.665091] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:48.172 05:54:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:07:48.172 05:54:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:48.172 05:54:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:07:48.172 05:54:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:07:48.172 05:54:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:07:48.172 05:54:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:48.172 05:54:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:07:48.172 05:54:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:07:48.172 05:54:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:07:48.172 05:54:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:48.172 [2024-07-13 05:54:39.785704] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:07:48.172 [2024-07-13 05:54:39.785804] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74906 ] 00:07:48.431 [2024-07-13 05:54:39.921584] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.431 [2024-07-13 05:54:39.962642] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.431 [2024-07-13 05:54:39.997313] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:48.431  Copying: 512/512 [B] (average 500 kBps) 00:07:48.431 00:07:48.431 05:54:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ fdazcur94xklmhn9fwoqln3k6x50aink0t8oljaf707s365vqt57h3hxja5dcy253rwev8xpb3to0jejsaqgucsaw1it6tfwmj2x8dycvm0mc2kssal2ohfbel1gx0kj9t7gidqkieq345tnuezyoq54m7smiheisl30ug2o9ao29ifxywq8w5avuoroyg29pa94ydzsrg3geuoctkqxtn2ua03s29qzk57kh0r44jqpm1czzpaz53y76wvbprm42zsqzukgeq42ojznxtizpsi0wzxlx7d2dcwhl0h8te8mm0kwhbk6qoosp73igql9ia7lb58sbgq6wljotusaxzebw0jr19036oqnyaocgzx2lr8bdnt1q03llf3p1z24czt3t83x6oeswzcq72fk9fu2npmo3ci4h55n0q634zoy3iu3ddgp29ywt6x149dyf6wjx7c2w8p3vfqds03vhxazt7u29a0wbdm8o362t7yxl3bfujkf5l0f89fqrzsl == \f\d\a\z\c\u\r\9\4\x\k\l\m\h\n\9\f\w\o\q\l\n\3\k\6\x\5\0\a\i\n\k\0\t\8\o\l\j\a\f\7\0\7\s\3\6\5\v\q\t\5\7\h\3\h\x\j\a\5\d\c\y\2\5\3\r\w\e\v\8\x\p\b\3\t\o\0\j\e\j\s\a\q\g\u\c\s\a\w\1\i\t\6\t\f\w\m\j\2\x\8\d\y\c\v\m\0\m\c\2\k\s\s\a\l\2\o\h\f\b\e\l\1\g\x\0\k\j\9\t\7\g\i\d\q\k\i\e\q\3\4\5\t\n\u\e\z\y\o\q\5\4\m\7\s\m\i\h\e\i\s\l\3\0\u\g\2\o\9\a\o\2\9\i\f\x\y\w\q\8\w\5\a\v\u\o\r\o\y\g\2\9\p\a\9\4\y\d\z\s\r\g\3\g\e\u\o\c\t\k\q\x\t\n\2\u\a\0\3\s\2\9\q\z\k\5\7\k\h\0\r\4\4\j\q\p\m\1\c\z\z\p\a\z\5\3\y\7\6\w\v\b\p\r\m\4\2\z\s\q\z\u\k\g\e\q\4\2\o\j\z\n\x\t\i\z\p\s\i\0\w\z\x\l\x\7\d\2\d\c\w\h\l\0\h\8\t\e\8\m\m\0\k\w\h\b\k\6\q\o\o\s\p\7\3\i\g\q\l\9\i\a\7\l\b\5\8\s\b\g\q\6\w\l\j\o\t\u\s\a\x\z\e\b\w\0\j\r\1\9\0\3\6\o\q\n\y\a\o\c\g\z\x\2\l\r\8\b\d\n\t\1\q\0\3\l\l\f\3\p\1\z\2\4\c\z\t\3\t\8\3\x\6\o\e\s\w\z\c\q\7\2\f\k\9\f\u\2\n\p\m\o\3\c\i\4\h\5\5\n\0\q\6\3\4\z\o\y\3\i\u\3\d\d\g\p\2\9\y\w\t\6\x\1\4\9\d\y\f\6\w\j\x\7\c\2\w\8\p\3\v\f\q\d\s\0\3\v\h\x\a\z\t\7\u\2\9\a\0\w\b\d\m\8\o\3\6\2\t\7\y\x\l\3\b\f\u\j\k\f\5\l\0\f\8\9\f\q\r\z\s\l ]] 00:07:48.431 00:07:48.431 real 0m1.218s 00:07:48.431 user 0m0.607s 00:07:48.431 sys 0m0.376s 00:07:48.431 05:54:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:48.431 ************************************ 00:07:48.431 END TEST dd_flag_nofollow 00:07:48.431 ************************************ 00:07:48.431 05:54:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:07:48.690 05:54:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:48.690 05:54:40 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:07:48.690 05:54:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:48.690 05:54:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:48.690 05:54:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:48.690 ************************************ 00:07:48.690 START TEST dd_flag_noatime 00:07:48.690 ************************************ 00:07:48.690 05:54:40 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1123 -- # noatime 00:07:48.690 05:54:40 
spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if 00:07:48.690 05:54:40 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:07:48.690 05:54:40 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:07:48.690 05:54:40 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:07:48.690 05:54:40 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:07:48.690 05:54:40 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:48.690 05:54:40 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1720850080 00:07:48.690 05:54:40 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:48.690 05:54:40 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1720850080 00:07:48.690 05:54:40 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:07:49.627 05:54:41 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:49.627 [2024-07-13 05:54:41.271233] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:49.627 [2024-07-13 05:54:41.271338] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74948 ] 00:07:49.886 [2024-07-13 05:54:41.411467] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.886 [2024-07-13 05:54:41.454213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.886 [2024-07-13 05:54:41.488051] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:50.144  Copying: 512/512 [B] (average 500 kBps) 00:07:50.144 00:07:50.144 05:54:41 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:50.144 05:54:41 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1720850080 )) 00:07:50.144 05:54:41 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:50.144 05:54:41 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1720850080 )) 00:07:50.144 05:54:41 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:50.144 [2024-07-13 05:54:41.725459] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:07:50.144 [2024-07-13 05:54:41.725569] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74956 ] 00:07:50.144 [2024-07-13 05:54:41.864544] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.404 [2024-07-13 05:54:41.907957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.404 [2024-07-13 05:54:41.941879] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:50.404  Copying: 512/512 [B] (average 500 kBps) 00:07:50.404 00:07:50.404 05:54:42 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:50.404 05:54:42 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1720850081 )) 00:07:50.404 00:07:50.404 real 0m1.911s 00:07:50.404 user 0m0.466s 00:07:50.404 sys 0m0.391s 00:07:50.404 ************************************ 00:07:50.404 END TEST dd_flag_noatime 00:07:50.404 ************************************ 00:07:50.404 05:54:42 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:50.404 05:54:42 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:07:50.663 05:54:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:50.663 05:54:42 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:07:50.663 05:54:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:50.663 05:54:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.663 05:54:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:50.663 ************************************ 00:07:50.663 START TEST dd_flags_misc 00:07:50.663 ************************************ 00:07:50.663 05:54:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1123 -- # io 00:07:50.663 05:54:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:50.663 05:54:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:50.663 05:54:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:50.663 05:54:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:50.663 05:54:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:07:50.663 05:54:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:07:50.663 05:54:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:50.663 05:54:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:50.663 05:54:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:50.663 [2024-07-13 05:54:42.212998] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:07:50.663 [2024-07-13 05:54:42.213092] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74990 ] 00:07:50.663 [2024-07-13 05:54:42.353598] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.922 [2024-07-13 05:54:42.397156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.922 [2024-07-13 05:54:42.431087] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:50.922  Copying: 512/512 [B] (average 500 kBps) 00:07:50.922 00:07:50.922 05:54:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ erosf7kc1kxn2yj1fufpkmzalbdcwxdvwcf5ljn186npton8a65z7h2s9o65a6bxqee03zhqm1va6xwcvm3w6v5tpf8s8f350cbikdzeh231fnks04a8hkubhxfbak7vu83ejcvzd3itl1y5uqmsh10bpcth1jasbfp34d5ay0jnvgruwscyfel5wngyg1oiod3ab9nludofzv9u9e5j97d6vw8utddadr59p2pxyolnkocwooog2tp4xd8kan27v8ed7jau7v1wsk8z6tmr3g3f4hffv3d311hkdwjnttxr2hxt45m211h9tme9y7rhzswyre87o8u59euzrzzsy8tc3zvrbaz5146q2vayhlzv8p8au1o0y5dramjdpvdaimltsa0mn0alu18de1rhuql65m75y2mluw17rsr7wjmnq4r6600hmksp7yuxny6x09xh3uqqhbo8gzjh5190jmety8owmfa8lijkiadw70z9axpehv97gg376oagrm8o == \e\r\o\s\f\7\k\c\1\k\x\n\2\y\j\1\f\u\f\p\k\m\z\a\l\b\d\c\w\x\d\v\w\c\f\5\l\j\n\1\8\6\n\p\t\o\n\8\a\6\5\z\7\h\2\s\9\o\6\5\a\6\b\x\q\e\e\0\3\z\h\q\m\1\v\a\6\x\w\c\v\m\3\w\6\v\5\t\p\f\8\s\8\f\3\5\0\c\b\i\k\d\z\e\h\2\3\1\f\n\k\s\0\4\a\8\h\k\u\b\h\x\f\b\a\k\7\v\u\8\3\e\j\c\v\z\d\3\i\t\l\1\y\5\u\q\m\s\h\1\0\b\p\c\t\h\1\j\a\s\b\f\p\3\4\d\5\a\y\0\j\n\v\g\r\u\w\s\c\y\f\e\l\5\w\n\g\y\g\1\o\i\o\d\3\a\b\9\n\l\u\d\o\f\z\v\9\u\9\e\5\j\9\7\d\6\v\w\8\u\t\d\d\a\d\r\5\9\p\2\p\x\y\o\l\n\k\o\c\w\o\o\o\g\2\t\p\4\x\d\8\k\a\n\2\7\v\8\e\d\7\j\a\u\7\v\1\w\s\k\8\z\6\t\m\r\3\g\3\f\4\h\f\f\v\3\d\3\1\1\h\k\d\w\j\n\t\t\x\r\2\h\x\t\4\5\m\2\1\1\h\9\t\m\e\9\y\7\r\h\z\s\w\y\r\e\8\7\o\8\u\5\9\e\u\z\r\z\z\s\y\8\t\c\3\z\v\r\b\a\z\5\1\4\6\q\2\v\a\y\h\l\z\v\8\p\8\a\u\1\o\0\y\5\d\r\a\m\j\d\p\v\d\a\i\m\l\t\s\a\0\m\n\0\a\l\u\1\8\d\e\1\r\h\u\q\l\6\5\m\7\5\y\2\m\l\u\w\1\7\r\s\r\7\w\j\m\n\q\4\r\6\6\0\0\h\m\k\s\p\7\y\u\x\n\y\6\x\0\9\x\h\3\u\q\q\h\b\o\8\g\z\j\h\5\1\9\0\j\m\e\t\y\8\o\w\m\f\a\8\l\i\j\k\i\a\d\w\7\0\z\9\a\x\p\e\h\v\9\7\g\g\3\7\6\o\a\g\r\m\8\o ]] 00:07:50.922 05:54:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:50.922 05:54:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:50.922 [2024-07-13 05:54:42.636597] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:07:50.922 [2024-07-13 05:54:42.636692] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74994 ] 00:07:51.180 [2024-07-13 05:54:42.777153] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.180 [2024-07-13 05:54:42.826638] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.180 [2024-07-13 05:54:42.874875] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:51.437  Copying: 512/512 [B] (average 500 kBps) 00:07:51.437 00:07:51.438 05:54:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ erosf7kc1kxn2yj1fufpkmzalbdcwxdvwcf5ljn186npton8a65z7h2s9o65a6bxqee03zhqm1va6xwcvm3w6v5tpf8s8f350cbikdzeh231fnks04a8hkubhxfbak7vu83ejcvzd3itl1y5uqmsh10bpcth1jasbfp34d5ay0jnvgruwscyfel5wngyg1oiod3ab9nludofzv9u9e5j97d6vw8utddadr59p2pxyolnkocwooog2tp4xd8kan27v8ed7jau7v1wsk8z6tmr3g3f4hffv3d311hkdwjnttxr2hxt45m211h9tme9y7rhzswyre87o8u59euzrzzsy8tc3zvrbaz5146q2vayhlzv8p8au1o0y5dramjdpvdaimltsa0mn0alu18de1rhuql65m75y2mluw17rsr7wjmnq4r6600hmksp7yuxny6x09xh3uqqhbo8gzjh5190jmety8owmfa8lijkiadw70z9axpehv97gg376oagrm8o == \e\r\o\s\f\7\k\c\1\k\x\n\2\y\j\1\f\u\f\p\k\m\z\a\l\b\d\c\w\x\d\v\w\c\f\5\l\j\n\1\8\6\n\p\t\o\n\8\a\6\5\z\7\h\2\s\9\o\6\5\a\6\b\x\q\e\e\0\3\z\h\q\m\1\v\a\6\x\w\c\v\m\3\w\6\v\5\t\p\f\8\s\8\f\3\5\0\c\b\i\k\d\z\e\h\2\3\1\f\n\k\s\0\4\a\8\h\k\u\b\h\x\f\b\a\k\7\v\u\8\3\e\j\c\v\z\d\3\i\t\l\1\y\5\u\q\m\s\h\1\0\b\p\c\t\h\1\j\a\s\b\f\p\3\4\d\5\a\y\0\j\n\v\g\r\u\w\s\c\y\f\e\l\5\w\n\g\y\g\1\o\i\o\d\3\a\b\9\n\l\u\d\o\f\z\v\9\u\9\e\5\j\9\7\d\6\v\w\8\u\t\d\d\a\d\r\5\9\p\2\p\x\y\o\l\n\k\o\c\w\o\o\o\g\2\t\p\4\x\d\8\k\a\n\2\7\v\8\e\d\7\j\a\u\7\v\1\w\s\k\8\z\6\t\m\r\3\g\3\f\4\h\f\f\v\3\d\3\1\1\h\k\d\w\j\n\t\t\x\r\2\h\x\t\4\5\m\2\1\1\h\9\t\m\e\9\y\7\r\h\z\s\w\y\r\e\8\7\o\8\u\5\9\e\u\z\r\z\z\s\y\8\t\c\3\z\v\r\b\a\z\5\1\4\6\q\2\v\a\y\h\l\z\v\8\p\8\a\u\1\o\0\y\5\d\r\a\m\j\d\p\v\d\a\i\m\l\t\s\a\0\m\n\0\a\l\u\1\8\d\e\1\r\h\u\q\l\6\5\m\7\5\y\2\m\l\u\w\1\7\r\s\r\7\w\j\m\n\q\4\r\6\6\0\0\h\m\k\s\p\7\y\u\x\n\y\6\x\0\9\x\h\3\u\q\q\h\b\o\8\g\z\j\h\5\1\9\0\j\m\e\t\y\8\o\w\m\f\a\8\l\i\j\k\i\a\d\w\7\0\z\9\a\x\p\e\h\v\9\7\g\g\3\7\6\o\a\g\r\m\8\o ]] 00:07:51.438 05:54:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:51.438 05:54:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:51.438 [2024-07-13 05:54:43.087891] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:07:51.438 [2024-07-13 05:54:43.088001] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75004 ] 00:07:51.695 [2024-07-13 05:54:43.225023] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.695 [2024-07-13 05:54:43.266270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.695 [2024-07-13 05:54:43.299274] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:51.954  Copying: 512/512 [B] (average 125 kBps) 00:07:51.954 00:07:51.954 05:54:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ erosf7kc1kxn2yj1fufpkmzalbdcwxdvwcf5ljn186npton8a65z7h2s9o65a6bxqee03zhqm1va6xwcvm3w6v5tpf8s8f350cbikdzeh231fnks04a8hkubhxfbak7vu83ejcvzd3itl1y5uqmsh10bpcth1jasbfp34d5ay0jnvgruwscyfel5wngyg1oiod3ab9nludofzv9u9e5j97d6vw8utddadr59p2pxyolnkocwooog2tp4xd8kan27v8ed7jau7v1wsk8z6tmr3g3f4hffv3d311hkdwjnttxr2hxt45m211h9tme9y7rhzswyre87o8u59euzrzzsy8tc3zvrbaz5146q2vayhlzv8p8au1o0y5dramjdpvdaimltsa0mn0alu18de1rhuql65m75y2mluw17rsr7wjmnq4r6600hmksp7yuxny6x09xh3uqqhbo8gzjh5190jmety8owmfa8lijkiadw70z9axpehv97gg376oagrm8o == \e\r\o\s\f\7\k\c\1\k\x\n\2\y\j\1\f\u\f\p\k\m\z\a\l\b\d\c\w\x\d\v\w\c\f\5\l\j\n\1\8\6\n\p\t\o\n\8\a\6\5\z\7\h\2\s\9\o\6\5\a\6\b\x\q\e\e\0\3\z\h\q\m\1\v\a\6\x\w\c\v\m\3\w\6\v\5\t\p\f\8\s\8\f\3\5\0\c\b\i\k\d\z\e\h\2\3\1\f\n\k\s\0\4\a\8\h\k\u\b\h\x\f\b\a\k\7\v\u\8\3\e\j\c\v\z\d\3\i\t\l\1\y\5\u\q\m\s\h\1\0\b\p\c\t\h\1\j\a\s\b\f\p\3\4\d\5\a\y\0\j\n\v\g\r\u\w\s\c\y\f\e\l\5\w\n\g\y\g\1\o\i\o\d\3\a\b\9\n\l\u\d\o\f\z\v\9\u\9\e\5\j\9\7\d\6\v\w\8\u\t\d\d\a\d\r\5\9\p\2\p\x\y\o\l\n\k\o\c\w\o\o\o\g\2\t\p\4\x\d\8\k\a\n\2\7\v\8\e\d\7\j\a\u\7\v\1\w\s\k\8\z\6\t\m\r\3\g\3\f\4\h\f\f\v\3\d\3\1\1\h\k\d\w\j\n\t\t\x\r\2\h\x\t\4\5\m\2\1\1\h\9\t\m\e\9\y\7\r\h\z\s\w\y\r\e\8\7\o\8\u\5\9\e\u\z\r\z\z\s\y\8\t\c\3\z\v\r\b\a\z\5\1\4\6\q\2\v\a\y\h\l\z\v\8\p\8\a\u\1\o\0\y\5\d\r\a\m\j\d\p\v\d\a\i\m\l\t\s\a\0\m\n\0\a\l\u\1\8\d\e\1\r\h\u\q\l\6\5\m\7\5\y\2\m\l\u\w\1\7\r\s\r\7\w\j\m\n\q\4\r\6\6\0\0\h\m\k\s\p\7\y\u\x\n\y\6\x\0\9\x\h\3\u\q\q\h\b\o\8\g\z\j\h\5\1\9\0\j\m\e\t\y\8\o\w\m\f\a\8\l\i\j\k\i\a\d\w\7\0\z\9\a\x\p\e\h\v\9\7\g\g\3\7\6\o\a\g\r\m\8\o ]] 00:07:51.954 05:54:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:51.954 05:54:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:51.954 [2024-07-13 05:54:43.510106] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:07:51.954 [2024-07-13 05:54:43.510206] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75013 ] 00:07:51.954 [2024-07-13 05:54:43.650491] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.213 [2024-07-13 05:54:43.691862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.213 [2024-07-13 05:54:43.724913] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:52.213  Copying: 512/512 [B] (average 250 kBps) 00:07:52.213 00:07:52.213 05:54:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ erosf7kc1kxn2yj1fufpkmzalbdcwxdvwcf5ljn186npton8a65z7h2s9o65a6bxqee03zhqm1va6xwcvm3w6v5tpf8s8f350cbikdzeh231fnks04a8hkubhxfbak7vu83ejcvzd3itl1y5uqmsh10bpcth1jasbfp34d5ay0jnvgruwscyfel5wngyg1oiod3ab9nludofzv9u9e5j97d6vw8utddadr59p2pxyolnkocwooog2tp4xd8kan27v8ed7jau7v1wsk8z6tmr3g3f4hffv3d311hkdwjnttxr2hxt45m211h9tme9y7rhzswyre87o8u59euzrzzsy8tc3zvrbaz5146q2vayhlzv8p8au1o0y5dramjdpvdaimltsa0mn0alu18de1rhuql65m75y2mluw17rsr7wjmnq4r6600hmksp7yuxny6x09xh3uqqhbo8gzjh5190jmety8owmfa8lijkiadw70z9axpehv97gg376oagrm8o == \e\r\o\s\f\7\k\c\1\k\x\n\2\y\j\1\f\u\f\p\k\m\z\a\l\b\d\c\w\x\d\v\w\c\f\5\l\j\n\1\8\6\n\p\t\o\n\8\a\6\5\z\7\h\2\s\9\o\6\5\a\6\b\x\q\e\e\0\3\z\h\q\m\1\v\a\6\x\w\c\v\m\3\w\6\v\5\t\p\f\8\s\8\f\3\5\0\c\b\i\k\d\z\e\h\2\3\1\f\n\k\s\0\4\a\8\h\k\u\b\h\x\f\b\a\k\7\v\u\8\3\e\j\c\v\z\d\3\i\t\l\1\y\5\u\q\m\s\h\1\0\b\p\c\t\h\1\j\a\s\b\f\p\3\4\d\5\a\y\0\j\n\v\g\r\u\w\s\c\y\f\e\l\5\w\n\g\y\g\1\o\i\o\d\3\a\b\9\n\l\u\d\o\f\z\v\9\u\9\e\5\j\9\7\d\6\v\w\8\u\t\d\d\a\d\r\5\9\p\2\p\x\y\o\l\n\k\o\c\w\o\o\o\g\2\t\p\4\x\d\8\k\a\n\2\7\v\8\e\d\7\j\a\u\7\v\1\w\s\k\8\z\6\t\m\r\3\g\3\f\4\h\f\f\v\3\d\3\1\1\h\k\d\w\j\n\t\t\x\r\2\h\x\t\4\5\m\2\1\1\h\9\t\m\e\9\y\7\r\h\z\s\w\y\r\e\8\7\o\8\u\5\9\e\u\z\r\z\z\s\y\8\t\c\3\z\v\r\b\a\z\5\1\4\6\q\2\v\a\y\h\l\z\v\8\p\8\a\u\1\o\0\y\5\d\r\a\m\j\d\p\v\d\a\i\m\l\t\s\a\0\m\n\0\a\l\u\1\8\d\e\1\r\h\u\q\l\6\5\m\7\5\y\2\m\l\u\w\1\7\r\s\r\7\w\j\m\n\q\4\r\6\6\0\0\h\m\k\s\p\7\y\u\x\n\y\6\x\0\9\x\h\3\u\q\q\h\b\o\8\g\z\j\h\5\1\9\0\j\m\e\t\y\8\o\w\m\f\a\8\l\i\j\k\i\a\d\w\7\0\z\9\a\x\p\e\h\v\9\7\g\g\3\7\6\o\a\g\r\m\8\o ]] 00:07:52.213 05:54:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:52.213 05:54:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:07:52.213 05:54:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:07:52.213 05:54:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:52.213 05:54:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:52.213 05:54:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:52.472 [2024-07-13 05:54:43.941486] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:07:52.472 [2024-07-13 05:54:43.941585] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75023 ] 00:07:52.472 [2024-07-13 05:54:44.085632] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.472 [2024-07-13 05:54:44.127240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.472 [2024-07-13 05:54:44.160915] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:52.730  Copying: 512/512 [B] (average 500 kBps) 00:07:52.730 00:07:52.730 05:54:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ mxfb862bhqznxhrfb1ubbh6xvrymesf3oxjdz9tm11165bufd4lz3tqd8wmvnreay9ky0nyp2x28tnggceub0yvhh0pews4qb5kovyzswmuychg8lp7h6g7re1wfi8qtc7bv9a06u6588uw2f251mr3bjk8obs31gg05hj06q8zdwbipaw1sqx9yrrshipdlmrpsj9hfl36daocjvdj5dia7d6zajhh0oer2t6jgptarmmoct4fm7re3jh4gc5lo84l1s87piczl2sq50lqh1snmarg5yw9zqwov0ocn5jl6ptpjsol2calg760e3ed1mq4go6hwepp62yb0w4vmmsnkgfnpcxkjw8cxuxnuxfv5r7on38lhsqtlsi32g5zgc7isj7oy57s6l5yfgb7rvxszllqotsaddw1oafz121b4fdtbq7c2f08tzjdn16v5yxeemz5qm2bxoewkv2k82z48yydod3hdwzo1uegbadqv23d3yoiw8xszsqdtnjrm == \m\x\f\b\8\6\2\b\h\q\z\n\x\h\r\f\b\1\u\b\b\h\6\x\v\r\y\m\e\s\f\3\o\x\j\d\z\9\t\m\1\1\1\6\5\b\u\f\d\4\l\z\3\t\q\d\8\w\m\v\n\r\e\a\y\9\k\y\0\n\y\p\2\x\2\8\t\n\g\g\c\e\u\b\0\y\v\h\h\0\p\e\w\s\4\q\b\5\k\o\v\y\z\s\w\m\u\y\c\h\g\8\l\p\7\h\6\g\7\r\e\1\w\f\i\8\q\t\c\7\b\v\9\a\0\6\u\6\5\8\8\u\w\2\f\2\5\1\m\r\3\b\j\k\8\o\b\s\3\1\g\g\0\5\h\j\0\6\q\8\z\d\w\b\i\p\a\w\1\s\q\x\9\y\r\r\s\h\i\p\d\l\m\r\p\s\j\9\h\f\l\3\6\d\a\o\c\j\v\d\j\5\d\i\a\7\d\6\z\a\j\h\h\0\o\e\r\2\t\6\j\g\p\t\a\r\m\m\o\c\t\4\f\m\7\r\e\3\j\h\4\g\c\5\l\o\8\4\l\1\s\8\7\p\i\c\z\l\2\s\q\5\0\l\q\h\1\s\n\m\a\r\g\5\y\w\9\z\q\w\o\v\0\o\c\n\5\j\l\6\p\t\p\j\s\o\l\2\c\a\l\g\7\6\0\e\3\e\d\1\m\q\4\g\o\6\h\w\e\p\p\6\2\y\b\0\w\4\v\m\m\s\n\k\g\f\n\p\c\x\k\j\w\8\c\x\u\x\n\u\x\f\v\5\r\7\o\n\3\8\l\h\s\q\t\l\s\i\3\2\g\5\z\g\c\7\i\s\j\7\o\y\5\7\s\6\l\5\y\f\g\b\7\r\v\x\s\z\l\l\q\o\t\s\a\d\d\w\1\o\a\f\z\1\2\1\b\4\f\d\t\b\q\7\c\2\f\0\8\t\z\j\d\n\1\6\v\5\y\x\e\e\m\z\5\q\m\2\b\x\o\e\w\k\v\2\k\8\2\z\4\8\y\y\d\o\d\3\h\d\w\z\o\1\u\e\g\b\a\d\q\v\2\3\d\3\y\o\i\w\8\x\s\z\s\q\d\t\n\j\r\m ]] 00:07:52.730 05:54:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:52.730 05:54:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:52.730 [2024-07-13 05:54:44.375435] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:07:52.730 [2024-07-13 05:54:44.375532] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75032 ] 00:07:52.989 [2024-07-13 05:54:44.514842] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.989 [2024-07-13 05:54:44.556866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.989 [2024-07-13 05:54:44.590327] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:53.248  Copying: 512/512 [B] (average 500 kBps) 00:07:53.248 00:07:53.248 05:54:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ mxfb862bhqznxhrfb1ubbh6xvrymesf3oxjdz9tm11165bufd4lz3tqd8wmvnreay9ky0nyp2x28tnggceub0yvhh0pews4qb5kovyzswmuychg8lp7h6g7re1wfi8qtc7bv9a06u6588uw2f251mr3bjk8obs31gg05hj06q8zdwbipaw1sqx9yrrshipdlmrpsj9hfl36daocjvdj5dia7d6zajhh0oer2t6jgptarmmoct4fm7re3jh4gc5lo84l1s87piczl2sq50lqh1snmarg5yw9zqwov0ocn5jl6ptpjsol2calg760e3ed1mq4go6hwepp62yb0w4vmmsnkgfnpcxkjw8cxuxnuxfv5r7on38lhsqtlsi32g5zgc7isj7oy57s6l5yfgb7rvxszllqotsaddw1oafz121b4fdtbq7c2f08tzjdn16v5yxeemz5qm2bxoewkv2k82z48yydod3hdwzo1uegbadqv23d3yoiw8xszsqdtnjrm == \m\x\f\b\8\6\2\b\h\q\z\n\x\h\r\f\b\1\u\b\b\h\6\x\v\r\y\m\e\s\f\3\o\x\j\d\z\9\t\m\1\1\1\6\5\b\u\f\d\4\l\z\3\t\q\d\8\w\m\v\n\r\e\a\y\9\k\y\0\n\y\p\2\x\2\8\t\n\g\g\c\e\u\b\0\y\v\h\h\0\p\e\w\s\4\q\b\5\k\o\v\y\z\s\w\m\u\y\c\h\g\8\l\p\7\h\6\g\7\r\e\1\w\f\i\8\q\t\c\7\b\v\9\a\0\6\u\6\5\8\8\u\w\2\f\2\5\1\m\r\3\b\j\k\8\o\b\s\3\1\g\g\0\5\h\j\0\6\q\8\z\d\w\b\i\p\a\w\1\s\q\x\9\y\r\r\s\h\i\p\d\l\m\r\p\s\j\9\h\f\l\3\6\d\a\o\c\j\v\d\j\5\d\i\a\7\d\6\z\a\j\h\h\0\o\e\r\2\t\6\j\g\p\t\a\r\m\m\o\c\t\4\f\m\7\r\e\3\j\h\4\g\c\5\l\o\8\4\l\1\s\8\7\p\i\c\z\l\2\s\q\5\0\l\q\h\1\s\n\m\a\r\g\5\y\w\9\z\q\w\o\v\0\o\c\n\5\j\l\6\p\t\p\j\s\o\l\2\c\a\l\g\7\6\0\e\3\e\d\1\m\q\4\g\o\6\h\w\e\p\p\6\2\y\b\0\w\4\v\m\m\s\n\k\g\f\n\p\c\x\k\j\w\8\c\x\u\x\n\u\x\f\v\5\r\7\o\n\3\8\l\h\s\q\t\l\s\i\3\2\g\5\z\g\c\7\i\s\j\7\o\y\5\7\s\6\l\5\y\f\g\b\7\r\v\x\s\z\l\l\q\o\t\s\a\d\d\w\1\o\a\f\z\1\2\1\b\4\f\d\t\b\q\7\c\2\f\0\8\t\z\j\d\n\1\6\v\5\y\x\e\e\m\z\5\q\m\2\b\x\o\e\w\k\v\2\k\8\2\z\4\8\y\y\d\o\d\3\h\d\w\z\o\1\u\e\g\b\a\d\q\v\2\3\d\3\y\o\i\w\8\x\s\z\s\q\d\t\n\j\r\m ]] 00:07:53.248 05:54:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:53.248 05:54:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:53.248 [2024-07-13 05:54:44.796316] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:07:53.248 [2024-07-13 05:54:44.796424] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75036 ] 00:07:53.248 [2024-07-13 05:54:44.933595] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.507 [2024-07-13 05:54:44.976051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.507 [2024-07-13 05:54:45.010013] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:53.507  Copying: 512/512 [B] (average 125 kBps) 00:07:53.507 00:07:53.507 05:54:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ mxfb862bhqznxhrfb1ubbh6xvrymesf3oxjdz9tm11165bufd4lz3tqd8wmvnreay9ky0nyp2x28tnggceub0yvhh0pews4qb5kovyzswmuychg8lp7h6g7re1wfi8qtc7bv9a06u6588uw2f251mr3bjk8obs31gg05hj06q8zdwbipaw1sqx9yrrshipdlmrpsj9hfl36daocjvdj5dia7d6zajhh0oer2t6jgptarmmoct4fm7re3jh4gc5lo84l1s87piczl2sq50lqh1snmarg5yw9zqwov0ocn5jl6ptpjsol2calg760e3ed1mq4go6hwepp62yb0w4vmmsnkgfnpcxkjw8cxuxnuxfv5r7on38lhsqtlsi32g5zgc7isj7oy57s6l5yfgb7rvxszllqotsaddw1oafz121b4fdtbq7c2f08tzjdn16v5yxeemz5qm2bxoewkv2k82z48yydod3hdwzo1uegbadqv23d3yoiw8xszsqdtnjrm == \m\x\f\b\8\6\2\b\h\q\z\n\x\h\r\f\b\1\u\b\b\h\6\x\v\r\y\m\e\s\f\3\o\x\j\d\z\9\t\m\1\1\1\6\5\b\u\f\d\4\l\z\3\t\q\d\8\w\m\v\n\r\e\a\y\9\k\y\0\n\y\p\2\x\2\8\t\n\g\g\c\e\u\b\0\y\v\h\h\0\p\e\w\s\4\q\b\5\k\o\v\y\z\s\w\m\u\y\c\h\g\8\l\p\7\h\6\g\7\r\e\1\w\f\i\8\q\t\c\7\b\v\9\a\0\6\u\6\5\8\8\u\w\2\f\2\5\1\m\r\3\b\j\k\8\o\b\s\3\1\g\g\0\5\h\j\0\6\q\8\z\d\w\b\i\p\a\w\1\s\q\x\9\y\r\r\s\h\i\p\d\l\m\r\p\s\j\9\h\f\l\3\6\d\a\o\c\j\v\d\j\5\d\i\a\7\d\6\z\a\j\h\h\0\o\e\r\2\t\6\j\g\p\t\a\r\m\m\o\c\t\4\f\m\7\r\e\3\j\h\4\g\c\5\l\o\8\4\l\1\s\8\7\p\i\c\z\l\2\s\q\5\0\l\q\h\1\s\n\m\a\r\g\5\y\w\9\z\q\w\o\v\0\o\c\n\5\j\l\6\p\t\p\j\s\o\l\2\c\a\l\g\7\6\0\e\3\e\d\1\m\q\4\g\o\6\h\w\e\p\p\6\2\y\b\0\w\4\v\m\m\s\n\k\g\f\n\p\c\x\k\j\w\8\c\x\u\x\n\u\x\f\v\5\r\7\o\n\3\8\l\h\s\q\t\l\s\i\3\2\g\5\z\g\c\7\i\s\j\7\o\y\5\7\s\6\l\5\y\f\g\b\7\r\v\x\s\z\l\l\q\o\t\s\a\d\d\w\1\o\a\f\z\1\2\1\b\4\f\d\t\b\q\7\c\2\f\0\8\t\z\j\d\n\1\6\v\5\y\x\e\e\m\z\5\q\m\2\b\x\o\e\w\k\v\2\k\8\2\z\4\8\y\y\d\o\d\3\h\d\w\z\o\1\u\e\g\b\a\d\q\v\2\3\d\3\y\o\i\w\8\x\s\z\s\q\d\t\n\j\r\m ]] 00:07:53.507 05:54:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:53.507 05:54:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:53.507 [2024-07-13 05:54:45.214053] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:07:53.507 [2024-07-13 05:54:45.214140] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75046 ] 00:07:53.766 [2024-07-13 05:54:45.354259] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.766 [2024-07-13 05:54:45.396130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.766 [2024-07-13 05:54:45.429728] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:54.026  Copying: 512/512 [B] (average 500 kBps) 00:07:54.026 00:07:54.026 ************************************ 00:07:54.026 END TEST dd_flags_misc 00:07:54.026 ************************************ 00:07:54.026 05:54:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ mxfb862bhqznxhrfb1ubbh6xvrymesf3oxjdz9tm11165bufd4lz3tqd8wmvnreay9ky0nyp2x28tnggceub0yvhh0pews4qb5kovyzswmuychg8lp7h6g7re1wfi8qtc7bv9a06u6588uw2f251mr3bjk8obs31gg05hj06q8zdwbipaw1sqx9yrrshipdlmrpsj9hfl36daocjvdj5dia7d6zajhh0oer2t6jgptarmmoct4fm7re3jh4gc5lo84l1s87piczl2sq50lqh1snmarg5yw9zqwov0ocn5jl6ptpjsol2calg760e3ed1mq4go6hwepp62yb0w4vmmsnkgfnpcxkjw8cxuxnuxfv5r7on38lhsqtlsi32g5zgc7isj7oy57s6l5yfgb7rvxszllqotsaddw1oafz121b4fdtbq7c2f08tzjdn16v5yxeemz5qm2bxoewkv2k82z48yydod3hdwzo1uegbadqv23d3yoiw8xszsqdtnjrm == \m\x\f\b\8\6\2\b\h\q\z\n\x\h\r\f\b\1\u\b\b\h\6\x\v\r\y\m\e\s\f\3\o\x\j\d\z\9\t\m\1\1\1\6\5\b\u\f\d\4\l\z\3\t\q\d\8\w\m\v\n\r\e\a\y\9\k\y\0\n\y\p\2\x\2\8\t\n\g\g\c\e\u\b\0\y\v\h\h\0\p\e\w\s\4\q\b\5\k\o\v\y\z\s\w\m\u\y\c\h\g\8\l\p\7\h\6\g\7\r\e\1\w\f\i\8\q\t\c\7\b\v\9\a\0\6\u\6\5\8\8\u\w\2\f\2\5\1\m\r\3\b\j\k\8\o\b\s\3\1\g\g\0\5\h\j\0\6\q\8\z\d\w\b\i\p\a\w\1\s\q\x\9\y\r\r\s\h\i\p\d\l\m\r\p\s\j\9\h\f\l\3\6\d\a\o\c\j\v\d\j\5\d\i\a\7\d\6\z\a\j\h\h\0\o\e\r\2\t\6\j\g\p\t\a\r\m\m\o\c\t\4\f\m\7\r\e\3\j\h\4\g\c\5\l\o\8\4\l\1\s\8\7\p\i\c\z\l\2\s\q\5\0\l\q\h\1\s\n\m\a\r\g\5\y\w\9\z\q\w\o\v\0\o\c\n\5\j\l\6\p\t\p\j\s\o\l\2\c\a\l\g\7\6\0\e\3\e\d\1\m\q\4\g\o\6\h\w\e\p\p\6\2\y\b\0\w\4\v\m\m\s\n\k\g\f\n\p\c\x\k\j\w\8\c\x\u\x\n\u\x\f\v\5\r\7\o\n\3\8\l\h\s\q\t\l\s\i\3\2\g\5\z\g\c\7\i\s\j\7\o\y\5\7\s\6\l\5\y\f\g\b\7\r\v\x\s\z\l\l\q\o\t\s\a\d\d\w\1\o\a\f\z\1\2\1\b\4\f\d\t\b\q\7\c\2\f\0\8\t\z\j\d\n\1\6\v\5\y\x\e\e\m\z\5\q\m\2\b\x\o\e\w\k\v\2\k\8\2\z\4\8\y\y\d\o\d\3\h\d\w\z\o\1\u\e\g\b\a\d\q\v\2\3\d\3\y\o\i\w\8\x\s\z\s\q\d\t\n\j\r\m ]] 00:07:54.026 00:07:54.026 real 0m3.434s 00:07:54.026 user 0m1.747s 00:07:54.026 sys 0m1.492s 00:07:54.026 05:54:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:54.026 05:54:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:54.026 05:54:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:54.026 05:54:45 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:07:54.026 05:54:45 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:07:54.026 * Second test run, disabling liburing, forcing AIO 00:07:54.026 05:54:45 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:07:54.026 05:54:45 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:07:54.026 05:54:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:54.026 05:54:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:07:54.026 05:54:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:54.026 ************************************ 00:07:54.026 START TEST dd_flag_append_forced_aio 00:07:54.026 ************************************ 00:07:54.026 05:54:45 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1123 -- # append 00:07:54.026 05:54:45 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:07:54.026 05:54:45 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:07:54.026 05:54:45 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:07:54.027 05:54:45 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:54.027 05:54:45 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:54.027 05:54:45 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=x42pykphygb0thia49sb9tw2osxsewv6 00:07:54.027 05:54:45 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:07:54.027 05:54:45 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:54.027 05:54:45 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:54.027 05:54:45 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=zrvguuzd4vvtidtv027zxfdqp7sd9frw 00:07:54.027 05:54:45 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s x42pykphygb0thia49sb9tw2osxsewv6 00:07:54.027 05:54:45 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s zrvguuzd4vvtidtv027zxfdqp7sd9frw 00:07:54.027 05:54:45 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:54.027 [2024-07-13 05:54:45.707171] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:07:54.027 [2024-07-13 05:54:45.707265] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75074 ] 00:07:54.286 [2024-07-13 05:54:45.845583] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.286 [2024-07-13 05:54:45.887685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.286 [2024-07-13 05:54:45.920864] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:54.544  Copying: 32/32 [B] (average 31 kBps) 00:07:54.544 00:07:54.544 05:54:46 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ zrvguuzd4vvtidtv027zxfdqp7sd9frwx42pykphygb0thia49sb9tw2osxsewv6 == \z\r\v\g\u\u\z\d\4\v\v\t\i\d\t\v\0\2\7\z\x\f\d\q\p\7\s\d\9\f\r\w\x\4\2\p\y\k\p\h\y\g\b\0\t\h\i\a\4\9\s\b\9\t\w\2\o\s\x\s\e\w\v\6 ]] 00:07:54.544 00:07:54.544 real 0m0.449s 00:07:54.544 user 0m0.230s 00:07:54.544 sys 0m0.095s 00:07:54.544 05:54:46 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:54.544 ************************************ 00:07:54.544 END TEST dd_flag_append_forced_aio 00:07:54.544 05:54:46 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:54.544 ************************************ 00:07:54.544 05:54:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:54.544 05:54:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:07:54.544 05:54:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:54.544 05:54:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:54.544 05:54:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:54.544 ************************************ 00:07:54.544 START TEST dd_flag_directory_forced_aio 00:07:54.544 ************************************ 00:07:54.544 05:54:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1123 -- # directory 00:07:54.544 05:54:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:54.544 05:54:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:07:54.544 05:54:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:54.544 05:54:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:54.544 05:54:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:54.544 05:54:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:54.544 05:54:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t 
"$arg")" in 00:07:54.544 05:54:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:54.544 05:54:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:54.545 05:54:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:54.545 05:54:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:54.545 05:54:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:54.545 [2024-07-13 05:54:46.202672] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:54.545 [2024-07-13 05:54:46.202766] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75101 ] 00:07:54.803 [2024-07-13 05:54:46.340617] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.803 [2024-07-13 05:54:46.381236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.803 [2024-07-13 05:54:46.414284] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:54.803 [2024-07-13 05:54:46.430406] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:54.803 [2024-07-13 05:54:46.430469] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:54.803 [2024-07-13 05:54:46.430487] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:54.803 [2024-07-13 05:54:46.492729] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:55.063 05:54:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:07:55.063 05:54:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:55.063 05:54:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:07:55.063 05:54:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:07:55.063 05:54:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:07:55.063 05:54:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:55.063 05:54:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:55.063 05:54:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:07:55.063 05:54:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:55.063 05:54:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.063 05:54:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:55.063 05:54:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.063 05:54:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:55.063 05:54:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.063 05:54:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:55.063 05:54:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.063 05:54:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:55.063 05:54:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:55.063 [2024-07-13 05:54:46.623819] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:55.063 [2024-07-13 05:54:46.623928] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75110 ] 00:07:55.063 [2024-07-13 05:54:46.763441] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.322 [2024-07-13 05:54:46.806859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.322 [2024-07-13 05:54:46.839844] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:55.322 [2024-07-13 05:54:46.855702] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:55.322 [2024-07-13 05:54:46.855758] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:55.322 [2024-07-13 05:54:46.855781] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:55.322 [2024-07-13 05:54:46.922992] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:55.322 05:54:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:07:55.322 05:54:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:55.322 05:54:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:07:55.322 05:54:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:07:55.322 05:54:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:07:55.322 
05:54:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:55.322 00:07:55.322 real 0m0.842s 00:07:55.322 user 0m0.433s 00:07:55.322 sys 0m0.200s 00:07:55.322 05:54:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:55.322 ************************************ 00:07:55.322 END TEST dd_flag_directory_forced_aio 00:07:55.322 ************************************ 00:07:55.322 05:54:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:55.322 05:54:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:55.322 05:54:47 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:07:55.322 05:54:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:55.322 05:54:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:55.322 05:54:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:55.322 ************************************ 00:07:55.322 START TEST dd_flag_nofollow_forced_aio 00:07:55.322 ************************************ 00:07:55.322 05:54:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1123 -- # nofollow 00:07:55.322 05:54:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:55.322 05:54:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:55.322 05:54:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:55.581 05:54:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:55.581 05:54:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:55.581 05:54:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:07:55.581 05:54:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:55.581 05:54:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.581 05:54:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:55.581 05:54:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.581 05:54:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:55.581 05:54:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.581 05:54:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:55.581 05:54:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.581 05:54:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:55.581 05:54:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:55.581 [2024-07-13 05:54:47.102334] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:55.581 [2024-07-13 05:54:47.102439] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75139 ] 00:07:55.581 [2024-07-13 05:54:47.238982] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.581 [2024-07-13 05:54:47.268362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.581 [2024-07-13 05:54:47.294825] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:55.840 [2024-07-13 05:54:47.309823] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:55.840 [2024-07-13 05:54:47.309860] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:55.840 [2024-07-13 05:54:47.309887] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:55.840 [2024-07-13 05:54:47.366849] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:55.840 05:54:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:07:55.840 05:54:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:55.840 05:54:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:07:55.840 05:54:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:07:55.840 05:54:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:07:55.840 05:54:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:55.840 05:54:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:55.840 05:54:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:07:55.840 05:54:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 
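The spdk_dd run traced just above opens dd.dump0.link, a symlink created a few records earlier, with --iflag=nofollow and is wrapped in NOT because it is expected to fail with "Too many levels of symbolic links" (ELOOP); the records that follow repeat the same negative check on the output side with --oflag=nofollow, and the preceding dd_flag_directory_forced_aio test failed in the same deliberate way with "Not a directory" (ENOTDIR) when a regular file was opened with the directory flag. A minimal sketch of both negative checks, using GNU coreutils dd as a stand-in for spdk_dd (an assumption: coreutils dd exposes the same iflag/oflag names and maps them onto the corresponding O_DIRECTORY and O_NOFOLLOW open(2) flags):

#!/usr/bin/env bash
# Sketch only: reproduce the directory/nofollow negative checks with coreutils dd.
set -u
work=$(mktemp -d)
printf 'payload' > "$work/dump0"
ln -fs "$work/dump0" "$work/dump0.link"

# directory flag: open(2) with O_DIRECTORY on a regular file fails with ENOTDIR.
dd if="$work/dump0" of=/dev/null iflag=directory 2>/dev/null \
  && echo 'unexpected success' || echo 'directory flag rejected as expected'

# nofollow flag: open(2) with O_NOFOLLOW on a symlink fails with ELOOP.
dd if="$work/dump0.link" of=/dev/null iflag=nofollow 2>/dev/null \
  && echo 'unexpected success' || echo 'nofollow flag rejected as expected'

rm -rf "$work"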
00:07:55.840 05:54:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.840 05:54:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:55.840 05:54:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.840 05:54:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:55.840 05:54:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.840 05:54:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:55.840 05:54:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.840 05:54:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:55.840 05:54:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:55.840 [2024-07-13 05:54:47.487608] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:55.840 [2024-07-13 05:54:47.487714] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75148 ] 00:07:56.099 [2024-07-13 05:54:47.625499] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.099 [2024-07-13 05:54:47.657617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.099 [2024-07-13 05:54:47.684182] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:56.099 [2024-07-13 05:54:47.697758] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:56.099 [2024-07-13 05:54:47.697849] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:56.099 [2024-07-13 05:54:47.697863] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:56.099 [2024-07-13 05:54:47.752657] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:56.099 05:54:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:07:56.099 05:54:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:56.099 05:54:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:07:56.099 05:54:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:07:56.099 05:54:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:07:56.099 05:54:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:56.099 05:54:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 00:07:56.099 05:54:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:56.099 05:54:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:56.359 05:54:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:56.359 [2024-07-13 05:54:47.880246] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:56.359 [2024-07-13 05:54:47.880380] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75155 ] 00:07:56.359 [2024-07-13 05:54:48.018676] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.359 [2024-07-13 05:54:48.055174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.359 [2024-07-13 05:54:48.081645] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:56.618  Copying: 512/512 [B] (average 500 kBps) 00:07:56.618 00:07:56.618 05:54:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ r8qie861ujpmqu1rid80e30vghiybamkcc9snvjrh507dso4fcwcohh0mifv8uc6hyrxxw6zaypra6u5a0j1bjup3kvhh84pm1ataxsu27cxr1wckhnyx19c4zeb4md2otu1mvcvrf9yu82xshkum4xcnl693or55gxmx1al4c511x1rfu6kly6zj2gxc1javi5hqv803w2qyp1w5amx6od3zw10pyl6jx3o7uh4y0mc5zty78ygpapbhi0zvube72v5zppdbx7rts9cl42j6xc3pxw9irliqrhulclzh3g5t2xd0e9gbnwmmj5cjnctelyryjgv8h66qadbfso16u6jn6bon9slw05nim0cx2o3yskttvsz0kd6rnehan746scakteem92i622veu1q8rkf00urbzd749o0hemguzg7bkvylj5sxb80icjzkxpqkc88mbuhs4jdulywfveszvhi9zws2nop7p45cu7hjq7rfcwqgmrocdteb1di0dob == \r\8\q\i\e\8\6\1\u\j\p\m\q\u\1\r\i\d\8\0\e\3\0\v\g\h\i\y\b\a\m\k\c\c\9\s\n\v\j\r\h\5\0\7\d\s\o\4\f\c\w\c\o\h\h\0\m\i\f\v\8\u\c\6\h\y\r\x\x\w\6\z\a\y\p\r\a\6\u\5\a\0\j\1\b\j\u\p\3\k\v\h\h\8\4\p\m\1\a\t\a\x\s\u\2\7\c\x\r\1\w\c\k\h\n\y\x\1\9\c\4\z\e\b\4\m\d\2\o\t\u\1\m\v\c\v\r\f\9\y\u\8\2\x\s\h\k\u\m\4\x\c\n\l\6\9\3\o\r\5\5\g\x\m\x\1\a\l\4\c\5\1\1\x\1\r\f\u\6\k\l\y\6\z\j\2\g\x\c\1\j\a\v\i\5\h\q\v\8\0\3\w\2\q\y\p\1\w\5\a\m\x\6\o\d\3\z\w\1\0\p\y\l\6\j\x\3\o\7\u\h\4\y\0\m\c\5\z\t\y\7\8\y\g\p\a\p\b\h\i\0\z\v\u\b\e\7\2\v\5\z\p\p\d\b\x\7\r\t\s\9\c\l\4\2\j\6\x\c\3\p\x\w\9\i\r\l\i\q\r\h\u\l\c\l\z\h\3\g\5\t\2\x\d\0\e\9\g\b\n\w\m\m\j\5\c\j\n\c\t\e\l\y\r\y\j\g\v\8\h\6\6\q\a\d\b\f\s\o\1\6\u\6\j\n\6\b\o\n\9\s\l\w\0\5\n\i\m\0\c\x\2\o\3\y\s\k\t\t\v\s\z\0\k\d\6\r\n\e\h\a\n\7\4\6\s\c\a\k\t\e\e\m\9\2\i\6\2\2\v\e\u\1\q\8\r\k\f\0\0\u\r\b\z\d\7\4\9\o\0\h\e\m\g\u\z\g\7\b\k\v\y\l\j\5\s\x\b\8\0\i\c\j\z\k\x\p\q\k\c\8\8\m\b\u\h\s\4\j\d\u\l\y\w\f\v\e\s\z\v\h\i\9\z\w\s\2\n\o\p\7\p\4\5\c\u\7\h\j\q\7\r\f\c\w\q\g\m\r\o\c\d\t\e\b\1\d\i\0\d\o\b ]] 00:07:56.618 00:07:56.618 real 0m1.181s 00:07:56.618 user 0m0.591s 00:07:56.618 sys 0m0.265s 00:07:56.618 05:54:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:56.618 ************************************ 00:07:56.618 END TEST dd_flag_nofollow_forced_aio 00:07:56.618 05:54:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 
00:07:56.618 ************************************ 00:07:56.618 05:54:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:56.618 05:54:48 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:07:56.618 05:54:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:56.618 05:54:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:56.618 05:54:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:56.618 ************************************ 00:07:56.618 START TEST dd_flag_noatime_forced_aio 00:07:56.618 ************************************ 00:07:56.618 05:54:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1123 -- # noatime 00:07:56.618 05:54:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:07:56.618 05:54:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:07:56.618 05:54:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:07:56.618 05:54:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:56.618 05:54:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:56.618 05:54:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:56.618 05:54:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1720850088 00:07:56.618 05:54:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:56.618 05:54:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1720850088 00:07:56.618 05:54:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:07:57.995 05:54:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:57.995 [2024-07-13 05:54:49.355695] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
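The noatime test starting above records the access time of dd.dump0 and dd.dump1 with stat --printf=%X, sleeps one second, and then launches a copy with --iflag=noatime; the assertions later in the log check that the source atime is unchanged after the noatime read and that a second copy without the flag does move it forward. A minimal sketch of the first half of that check with coreutils dd and stat standing in for spdk_dd (assumptions: iflag=noatime maps to O_NOATIME, which Linux honours only for the file's owner or a CAP_FOWNER process, and the filesystem's relatime/noatime mount options are not already suppressing atime updates):

#!/usr/bin/env bash
# Sketch only: a read with O_NOATIME should leave the source access time untouched.
set -u
src=$(mktemp)
printf 'payload' > "$src"

atime_before=$(stat --printf=%X "$src")
sleep 1

dd if="$src" of=/dev/null iflag=noatime 2>/dev/null
atime_after=$(stat --printf=%X "$src")

(( atime_before == atime_after )) \
  && echo 'noatime honoured: atime unchanged' \
  || echo 'atime moved (flag not honoured on this mount?)'

rm -f "$src"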
00:07:57.995 [2024-07-13 05:54:49.355802] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75191 ] 00:07:57.995 [2024-07-13 05:54:49.495986] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.995 [2024-07-13 05:54:49.539144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.995 [2024-07-13 05:54:49.572013] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:58.254  Copying: 512/512 [B] (average 500 kBps) 00:07:58.254 00:07:58.254 05:54:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:58.254 05:54:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1720850088 )) 00:07:58.254 05:54:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:58.254 05:54:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1720850088 )) 00:07:58.254 05:54:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:58.254 [2024-07-13 05:54:49.814164] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:58.254 [2024-07-13 05:54:49.814276] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75202 ] 00:07:58.254 [2024-07-13 05:54:49.953860] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.513 [2024-07-13 05:54:49.995019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.513 [2024-07-13 05:54:50.028035] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:58.513  Copying: 512/512 [B] (average 500 kBps) 00:07:58.513 00:07:58.513 05:54:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:58.513 05:54:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1720850090 )) 00:07:58.513 00:07:58.513 real 0m1.931s 00:07:58.513 user 0m0.455s 00:07:58.513 sys 0m0.235s 00:07:58.513 05:54:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:58.513 05:54:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:58.513 ************************************ 00:07:58.513 END TEST dd_flag_noatime_forced_aio 00:07:58.513 ************************************ 00:07:58.774 05:54:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:58.774 05:54:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:07:58.774 05:54:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:58.774 05:54:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:58.774 05:54:50 
spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:58.774 ************************************ 00:07:58.774 START TEST dd_flags_misc_forced_aio 00:07:58.774 ************************************ 00:07:58.774 05:54:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1123 -- # io 00:07:58.774 05:54:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:58.774 05:54:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:58.774 05:54:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:58.774 05:54:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:58.774 05:54:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:07:58.774 05:54:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:58.774 05:54:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:58.774 05:54:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:58.774 05:54:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:58.774 [2024-07-13 05:54:50.321622] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:58.774 [2024-07-13 05:54:50.321725] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75229 ] 00:07:58.774 [2024-07-13 05:54:50.459950] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.034 [2024-07-13 05:54:50.503120] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.034 [2024-07-13 05:54:50.536524] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:59.034  Copying: 512/512 [B] (average 500 kBps) 00:07:59.034 00:07:59.034 05:54:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ r6z9wqrz725hvh7ggva9roe1435fjt9zm7yhlv6e6ky4ced240ajtrz0mteqqc0eeddg1kjwepw1zvvs4ewrcm21gxnqs3d7h4d2qo6cewqt8ozrm8gwzxboravnzvbdzk3d3p54kxzoquz39oo8epra50dqqn1392hno655gkmxkaznrs6f704xtwoiupt8rq7mgy18i5dnjsn5wbt38x05aishjk4wv5zilrt1x6nvfdti4rfpikthvtceqayzzoigza4g868n3cd4eiddnjp5k10czdal80m7lenajyqllxk1wjmtzry4pzxclezabd3ioztls54z4eeuodjfj4rzuw98irbs0b757a7ep2lexg8x0lzegiqvmngarble4r49avlht2icvnltp02tjh2zsktvbyytrdxtstogkq9bu9gh9m12m534jmn1bgtza068446oymp190fh7r1w13fhpa02mxfwx1qvtrtforo49hpwu7h4fkgh8n5c35eh == 
\r\6\z\9\w\q\r\z\7\2\5\h\v\h\7\g\g\v\a\9\r\o\e\1\4\3\5\f\j\t\9\z\m\7\y\h\l\v\6\e\6\k\y\4\c\e\d\2\4\0\a\j\t\r\z\0\m\t\e\q\q\c\0\e\e\d\d\g\1\k\j\w\e\p\w\1\z\v\v\s\4\e\w\r\c\m\2\1\g\x\n\q\s\3\d\7\h\4\d\2\q\o\6\c\e\w\q\t\8\o\z\r\m\8\g\w\z\x\b\o\r\a\v\n\z\v\b\d\z\k\3\d\3\p\5\4\k\x\z\o\q\u\z\3\9\o\o\8\e\p\r\a\5\0\d\q\q\n\1\3\9\2\h\n\o\6\5\5\g\k\m\x\k\a\z\n\r\s\6\f\7\0\4\x\t\w\o\i\u\p\t\8\r\q\7\m\g\y\1\8\i\5\d\n\j\s\n\5\w\b\t\3\8\x\0\5\a\i\s\h\j\k\4\w\v\5\z\i\l\r\t\1\x\6\n\v\f\d\t\i\4\r\f\p\i\k\t\h\v\t\c\e\q\a\y\z\z\o\i\g\z\a\4\g\8\6\8\n\3\c\d\4\e\i\d\d\n\j\p\5\k\1\0\c\z\d\a\l\8\0\m\7\l\e\n\a\j\y\q\l\l\x\k\1\w\j\m\t\z\r\y\4\p\z\x\c\l\e\z\a\b\d\3\i\o\z\t\l\s\5\4\z\4\e\e\u\o\d\j\f\j\4\r\z\u\w\9\8\i\r\b\s\0\b\7\5\7\a\7\e\p\2\l\e\x\g\8\x\0\l\z\e\g\i\q\v\m\n\g\a\r\b\l\e\4\r\4\9\a\v\l\h\t\2\i\c\v\n\l\t\p\0\2\t\j\h\2\z\s\k\t\v\b\y\y\t\r\d\x\t\s\t\o\g\k\q\9\b\u\9\g\h\9\m\1\2\m\5\3\4\j\m\n\1\b\g\t\z\a\0\6\8\4\4\6\o\y\m\p\1\9\0\f\h\7\r\1\w\1\3\f\h\p\a\0\2\m\x\f\w\x\1\q\v\t\r\t\f\o\r\o\4\9\h\p\w\u\7\h\4\f\k\g\h\8\n\5\c\3\5\e\h ]] 00:07:59.034 05:54:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:59.034 05:54:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:59.034 [2024-07-13 05:54:50.758005] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:59.034 [2024-07-13 05:54:50.758098] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75236 ] 00:07:59.293 [2024-07-13 05:54:50.898424] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.293 [2024-07-13 05:54:50.940919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.293 [2024-07-13 05:54:50.973570] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:59.551  Copying: 512/512 [B] (average 500 kBps) 00:07:59.551 00:07:59.551 05:54:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ r6z9wqrz725hvh7ggva9roe1435fjt9zm7yhlv6e6ky4ced240ajtrz0mteqqc0eeddg1kjwepw1zvvs4ewrcm21gxnqs3d7h4d2qo6cewqt8ozrm8gwzxboravnzvbdzk3d3p54kxzoquz39oo8epra50dqqn1392hno655gkmxkaznrs6f704xtwoiupt8rq7mgy18i5dnjsn5wbt38x05aishjk4wv5zilrt1x6nvfdti4rfpikthvtceqayzzoigza4g868n3cd4eiddnjp5k10czdal80m7lenajyqllxk1wjmtzry4pzxclezabd3ioztls54z4eeuodjfj4rzuw98irbs0b757a7ep2lexg8x0lzegiqvmngarble4r49avlht2icvnltp02tjh2zsktvbyytrdxtstogkq9bu9gh9m12m534jmn1bgtza068446oymp190fh7r1w13fhpa02mxfwx1qvtrtforo49hpwu7h4fkgh8n5c35eh == 
\r\6\z\9\w\q\r\z\7\2\5\h\v\h\7\g\g\v\a\9\r\o\e\1\4\3\5\f\j\t\9\z\m\7\y\h\l\v\6\e\6\k\y\4\c\e\d\2\4\0\a\j\t\r\z\0\m\t\e\q\q\c\0\e\e\d\d\g\1\k\j\w\e\p\w\1\z\v\v\s\4\e\w\r\c\m\2\1\g\x\n\q\s\3\d\7\h\4\d\2\q\o\6\c\e\w\q\t\8\o\z\r\m\8\g\w\z\x\b\o\r\a\v\n\z\v\b\d\z\k\3\d\3\p\5\4\k\x\z\o\q\u\z\3\9\o\o\8\e\p\r\a\5\0\d\q\q\n\1\3\9\2\h\n\o\6\5\5\g\k\m\x\k\a\z\n\r\s\6\f\7\0\4\x\t\w\o\i\u\p\t\8\r\q\7\m\g\y\1\8\i\5\d\n\j\s\n\5\w\b\t\3\8\x\0\5\a\i\s\h\j\k\4\w\v\5\z\i\l\r\t\1\x\6\n\v\f\d\t\i\4\r\f\p\i\k\t\h\v\t\c\e\q\a\y\z\z\o\i\g\z\a\4\g\8\6\8\n\3\c\d\4\e\i\d\d\n\j\p\5\k\1\0\c\z\d\a\l\8\0\m\7\l\e\n\a\j\y\q\l\l\x\k\1\w\j\m\t\z\r\y\4\p\z\x\c\l\e\z\a\b\d\3\i\o\z\t\l\s\5\4\z\4\e\e\u\o\d\j\f\j\4\r\z\u\w\9\8\i\r\b\s\0\b\7\5\7\a\7\e\p\2\l\e\x\g\8\x\0\l\z\e\g\i\q\v\m\n\g\a\r\b\l\e\4\r\4\9\a\v\l\h\t\2\i\c\v\n\l\t\p\0\2\t\j\h\2\z\s\k\t\v\b\y\y\t\r\d\x\t\s\t\o\g\k\q\9\b\u\9\g\h\9\m\1\2\m\5\3\4\j\m\n\1\b\g\t\z\a\0\6\8\4\4\6\o\y\m\p\1\9\0\f\h\7\r\1\w\1\3\f\h\p\a\0\2\m\x\f\w\x\1\q\v\t\r\t\f\o\r\o\4\9\h\p\w\u\7\h\4\f\k\g\h\8\n\5\c\3\5\e\h ]] 00:07:59.551 05:54:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:59.551 05:54:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:59.551 [2024-07-13 05:54:51.193595] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:59.551 [2024-07-13 05:54:51.193694] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75244 ] 00:07:59.809 [2024-07-13 05:54:51.333151] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.809 [2024-07-13 05:54:51.374329] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.809 [2024-07-13 05:54:51.406602] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:00.067  Copying: 512/512 [B] (average 250 kBps) 00:08:00.067 00:08:00.067 05:54:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ r6z9wqrz725hvh7ggva9roe1435fjt9zm7yhlv6e6ky4ced240ajtrz0mteqqc0eeddg1kjwepw1zvvs4ewrcm21gxnqs3d7h4d2qo6cewqt8ozrm8gwzxboravnzvbdzk3d3p54kxzoquz39oo8epra50dqqn1392hno655gkmxkaznrs6f704xtwoiupt8rq7mgy18i5dnjsn5wbt38x05aishjk4wv5zilrt1x6nvfdti4rfpikthvtceqayzzoigza4g868n3cd4eiddnjp5k10czdal80m7lenajyqllxk1wjmtzry4pzxclezabd3ioztls54z4eeuodjfj4rzuw98irbs0b757a7ep2lexg8x0lzegiqvmngarble4r49avlht2icvnltp02tjh2zsktvbyytrdxtstogkq9bu9gh9m12m534jmn1bgtza068446oymp190fh7r1w13fhpa02mxfwx1qvtrtforo49hpwu7h4fkgh8n5c35eh == 
\r\6\z\9\w\q\r\z\7\2\5\h\v\h\7\g\g\v\a\9\r\o\e\1\4\3\5\f\j\t\9\z\m\7\y\h\l\v\6\e\6\k\y\4\c\e\d\2\4\0\a\j\t\r\z\0\m\t\e\q\q\c\0\e\e\d\d\g\1\k\j\w\e\p\w\1\z\v\v\s\4\e\w\r\c\m\2\1\g\x\n\q\s\3\d\7\h\4\d\2\q\o\6\c\e\w\q\t\8\o\z\r\m\8\g\w\z\x\b\o\r\a\v\n\z\v\b\d\z\k\3\d\3\p\5\4\k\x\z\o\q\u\z\3\9\o\o\8\e\p\r\a\5\0\d\q\q\n\1\3\9\2\h\n\o\6\5\5\g\k\m\x\k\a\z\n\r\s\6\f\7\0\4\x\t\w\o\i\u\p\t\8\r\q\7\m\g\y\1\8\i\5\d\n\j\s\n\5\w\b\t\3\8\x\0\5\a\i\s\h\j\k\4\w\v\5\z\i\l\r\t\1\x\6\n\v\f\d\t\i\4\r\f\p\i\k\t\h\v\t\c\e\q\a\y\z\z\o\i\g\z\a\4\g\8\6\8\n\3\c\d\4\e\i\d\d\n\j\p\5\k\1\0\c\z\d\a\l\8\0\m\7\l\e\n\a\j\y\q\l\l\x\k\1\w\j\m\t\z\r\y\4\p\z\x\c\l\e\z\a\b\d\3\i\o\z\t\l\s\5\4\z\4\e\e\u\o\d\j\f\j\4\r\z\u\w\9\8\i\r\b\s\0\b\7\5\7\a\7\e\p\2\l\e\x\g\8\x\0\l\z\e\g\i\q\v\m\n\g\a\r\b\l\e\4\r\4\9\a\v\l\h\t\2\i\c\v\n\l\t\p\0\2\t\j\h\2\z\s\k\t\v\b\y\y\t\r\d\x\t\s\t\o\g\k\q\9\b\u\9\g\h\9\m\1\2\m\5\3\4\j\m\n\1\b\g\t\z\a\0\6\8\4\4\6\o\y\m\p\1\9\0\f\h\7\r\1\w\1\3\f\h\p\a\0\2\m\x\f\w\x\1\q\v\t\r\t\f\o\r\o\4\9\h\p\w\u\7\h\4\f\k\g\h\8\n\5\c\3\5\e\h ]] 00:08:00.067 05:54:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:00.067 05:54:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:00.067 [2024-07-13 05:54:51.630854] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:08:00.067 [2024-07-13 05:54:51.630957] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75251 ] 00:08:00.067 [2024-07-13 05:54:51.770335] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.325 [2024-07-13 05:54:51.812401] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.325 [2024-07-13 05:54:51.844953] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:00.325  Copying: 512/512 [B] (average 500 kBps) 00:08:00.325 00:08:00.325 05:54:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ r6z9wqrz725hvh7ggva9roe1435fjt9zm7yhlv6e6ky4ced240ajtrz0mteqqc0eeddg1kjwepw1zvvs4ewrcm21gxnqs3d7h4d2qo6cewqt8ozrm8gwzxboravnzvbdzk3d3p54kxzoquz39oo8epra50dqqn1392hno655gkmxkaznrs6f704xtwoiupt8rq7mgy18i5dnjsn5wbt38x05aishjk4wv5zilrt1x6nvfdti4rfpikthvtceqayzzoigza4g868n3cd4eiddnjp5k10czdal80m7lenajyqllxk1wjmtzry4pzxclezabd3ioztls54z4eeuodjfj4rzuw98irbs0b757a7ep2lexg8x0lzegiqvmngarble4r49avlht2icvnltp02tjh2zsktvbyytrdxtstogkq9bu9gh9m12m534jmn1bgtza068446oymp190fh7r1w13fhpa02mxfwx1qvtrtforo49hpwu7h4fkgh8n5c35eh == 
\r\6\z\9\w\q\r\z\7\2\5\h\v\h\7\g\g\v\a\9\r\o\e\1\4\3\5\f\j\t\9\z\m\7\y\h\l\v\6\e\6\k\y\4\c\e\d\2\4\0\a\j\t\r\z\0\m\t\e\q\q\c\0\e\e\d\d\g\1\k\j\w\e\p\w\1\z\v\v\s\4\e\w\r\c\m\2\1\g\x\n\q\s\3\d\7\h\4\d\2\q\o\6\c\e\w\q\t\8\o\z\r\m\8\g\w\z\x\b\o\r\a\v\n\z\v\b\d\z\k\3\d\3\p\5\4\k\x\z\o\q\u\z\3\9\o\o\8\e\p\r\a\5\0\d\q\q\n\1\3\9\2\h\n\o\6\5\5\g\k\m\x\k\a\z\n\r\s\6\f\7\0\4\x\t\w\o\i\u\p\t\8\r\q\7\m\g\y\1\8\i\5\d\n\j\s\n\5\w\b\t\3\8\x\0\5\a\i\s\h\j\k\4\w\v\5\z\i\l\r\t\1\x\6\n\v\f\d\t\i\4\r\f\p\i\k\t\h\v\t\c\e\q\a\y\z\z\o\i\g\z\a\4\g\8\6\8\n\3\c\d\4\e\i\d\d\n\j\p\5\k\1\0\c\z\d\a\l\8\0\m\7\l\e\n\a\j\y\q\l\l\x\k\1\w\j\m\t\z\r\y\4\p\z\x\c\l\e\z\a\b\d\3\i\o\z\t\l\s\5\4\z\4\e\e\u\o\d\j\f\j\4\r\z\u\w\9\8\i\r\b\s\0\b\7\5\7\a\7\e\p\2\l\e\x\g\8\x\0\l\z\e\g\i\q\v\m\n\g\a\r\b\l\e\4\r\4\9\a\v\l\h\t\2\i\c\v\n\l\t\p\0\2\t\j\h\2\z\s\k\t\v\b\y\y\t\r\d\x\t\s\t\o\g\k\q\9\b\u\9\g\h\9\m\1\2\m\5\3\4\j\m\n\1\b\g\t\z\a\0\6\8\4\4\6\o\y\m\p\1\9\0\f\h\7\r\1\w\1\3\f\h\p\a\0\2\m\x\f\w\x\1\q\v\t\r\t\f\o\r\o\4\9\h\p\w\u\7\h\4\f\k\g\h\8\n\5\c\3\5\e\h ]] 00:08:00.325 05:54:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:00.325 05:54:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:08:00.325 05:54:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:00.325 05:54:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:00.325 05:54:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:00.325 05:54:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:00.583 [2024-07-13 05:54:52.079095] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
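The eight spdk_dd runs in this stretch of the log come from a nested loop visible in the trace: flags_ro=(direct nonblock) supplies the input flag and flags_rw=("${flags_ro[@]}" sync dsync) the output flag, so every read flag is paired with every write flag and the 512-byte random payload is compared against the copy after each pass. A compact sketch of that loop shape, again with coreutils dd standing in for spdk_dd (assumptions: the flag names behave equivalently, and iflag=direct needs a filesystem that supports O_DIRECT with 512-byte alignment):

#!/usr/bin/env bash
# Sketch only: run every input/output flag combination and verify the copy each time.
set -u
flags_ro=(direct nonblock)
flags_rw=("${flags_ro[@]}" sync dsync)

src=$(mktemp)
dst=$(mktemp)
head -c 512 /dev/urandom > "$src"

for flag_ro in "${flags_ro[@]}"; do
  for flag_rw in "${flags_rw[@]}"; do
    dd if="$src" of="$dst" bs=512 iflag="$flag_ro" oflag="$flag_rw" 2>/dev/null
    cmp -s "$src" "$dst" && echo "ok: iflag=$flag_ro oflag=$flag_rw" \
                         || echo "mismatch: iflag=$flag_ro oflag=$flag_rw"
  done
done

rm -f "$src" "$dst"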
00:08:00.583 [2024-07-13 05:54:52.079200] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75259 ] 00:08:00.583 [2024-07-13 05:54:52.216055] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.583 [2024-07-13 05:54:52.258158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.583 [2024-07-13 05:54:52.290415] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:00.842  Copying: 512/512 [B] (average 500 kBps) 00:08:00.842 00:08:00.842 05:54:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 7gs0rq07esm7dz0dnjzosri9m06uahtneylw6iuxnj60tw53q1ww150bbhkgo9h90o2vbl8mpch6mc6b6smmoed1lj7wa4dirrfwknk022gr5zvjgxsnwpr6816bo8nxfox7u8u9aaonze20wojci1xkg30jo0lradqb8boh1vwv5heej18eb8n0y93bl3inqnc9wprhe62m394med5xhpbzjoyys7mq0u21e2yaupu69r58llh55o0ub7yzggl7fusu3v53w63r49q3kmddr0pe89g27txwo2lhahvm191xeopob7d4pxd0h8olcnq1nck6f3vtpc9qydrvn1kwt9tfah0se6g5edvfaruakq9seha8oztsiwf2weya2ja4cgma7xzngwy5izold5xvma98mx8bzut2xo97krwr651rpd0svke7txiofjnc0et6c1ypkjr48zi4jb6xunpdlp3dn8oqjm4f6o385296wrb2me1p6rywh9wt7goy88uo == \7\g\s\0\r\q\0\7\e\s\m\7\d\z\0\d\n\j\z\o\s\r\i\9\m\0\6\u\a\h\t\n\e\y\l\w\6\i\u\x\n\j\6\0\t\w\5\3\q\1\w\w\1\5\0\b\b\h\k\g\o\9\h\9\0\o\2\v\b\l\8\m\p\c\h\6\m\c\6\b\6\s\m\m\o\e\d\1\l\j\7\w\a\4\d\i\r\r\f\w\k\n\k\0\2\2\g\r\5\z\v\j\g\x\s\n\w\p\r\6\8\1\6\b\o\8\n\x\f\o\x\7\u\8\u\9\a\a\o\n\z\e\2\0\w\o\j\c\i\1\x\k\g\3\0\j\o\0\l\r\a\d\q\b\8\b\o\h\1\v\w\v\5\h\e\e\j\1\8\e\b\8\n\0\y\9\3\b\l\3\i\n\q\n\c\9\w\p\r\h\e\6\2\m\3\9\4\m\e\d\5\x\h\p\b\z\j\o\y\y\s\7\m\q\0\u\2\1\e\2\y\a\u\p\u\6\9\r\5\8\l\l\h\5\5\o\0\u\b\7\y\z\g\g\l\7\f\u\s\u\3\v\5\3\w\6\3\r\4\9\q\3\k\m\d\d\r\0\p\e\8\9\g\2\7\t\x\w\o\2\l\h\a\h\v\m\1\9\1\x\e\o\p\o\b\7\d\4\p\x\d\0\h\8\o\l\c\n\q\1\n\c\k\6\f\3\v\t\p\c\9\q\y\d\r\v\n\1\k\w\t\9\t\f\a\h\0\s\e\6\g\5\e\d\v\f\a\r\u\a\k\q\9\s\e\h\a\8\o\z\t\s\i\w\f\2\w\e\y\a\2\j\a\4\c\g\m\a\7\x\z\n\g\w\y\5\i\z\o\l\d\5\x\v\m\a\9\8\m\x\8\b\z\u\t\2\x\o\9\7\k\r\w\r\6\5\1\r\p\d\0\s\v\k\e\7\t\x\i\o\f\j\n\c\0\e\t\6\c\1\y\p\k\j\r\4\8\z\i\4\j\b\6\x\u\n\p\d\l\p\3\d\n\8\o\q\j\m\4\f\6\o\3\8\5\2\9\6\w\r\b\2\m\e\1\p\6\r\y\w\h\9\w\t\7\g\o\y\8\8\u\o ]] 00:08:00.842 05:54:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:00.842 05:54:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:00.842 [2024-07-13 05:54:52.510449] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
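The long runs of backslash-escaped characters in the [[ ... == ... ]] records around here are not corruption; they appear to be ordinary bash xtrace output. posix.sh compares the payload read back from dd.dump1 with the original random string, and when the right-hand side of == inside [[ ]] is quoted, set -x prints it with every character escaped to show it is matched literally rather than as a glob pattern. A two-line illustration (the variable name and value are made up for the example):

#!/usr/bin/env bash
# Sketch only: quoted literals on the right of == are traced backslash-escaped.
set -x
payload='r6z9wq'                  # stand-in for the 512-byte random string
[[ $payload == "$payload" ]]      # traced roughly as: [[ r6z9wq == \r\6\z\9\w\q ]]
set +x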
00:08:00.842 [2024-07-13 05:54:52.510554] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75261 ] 00:08:01.100 [2024-07-13 05:54:52.649978] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.100 [2024-07-13 05:54:52.692419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.100 [2024-07-13 05:54:52.725545] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:01.359  Copying: 512/512 [B] (average 500 kBps) 00:08:01.359 00:08:01.359 05:54:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 7gs0rq07esm7dz0dnjzosri9m06uahtneylw6iuxnj60tw53q1ww150bbhkgo9h90o2vbl8mpch6mc6b6smmoed1lj7wa4dirrfwknk022gr5zvjgxsnwpr6816bo8nxfox7u8u9aaonze20wojci1xkg30jo0lradqb8boh1vwv5heej18eb8n0y93bl3inqnc9wprhe62m394med5xhpbzjoyys7mq0u21e2yaupu69r58llh55o0ub7yzggl7fusu3v53w63r49q3kmddr0pe89g27txwo2lhahvm191xeopob7d4pxd0h8olcnq1nck6f3vtpc9qydrvn1kwt9tfah0se6g5edvfaruakq9seha8oztsiwf2weya2ja4cgma7xzngwy5izold5xvma98mx8bzut2xo97krwr651rpd0svke7txiofjnc0et6c1ypkjr48zi4jb6xunpdlp3dn8oqjm4f6o385296wrb2me1p6rywh9wt7goy88uo == \7\g\s\0\r\q\0\7\e\s\m\7\d\z\0\d\n\j\z\o\s\r\i\9\m\0\6\u\a\h\t\n\e\y\l\w\6\i\u\x\n\j\6\0\t\w\5\3\q\1\w\w\1\5\0\b\b\h\k\g\o\9\h\9\0\o\2\v\b\l\8\m\p\c\h\6\m\c\6\b\6\s\m\m\o\e\d\1\l\j\7\w\a\4\d\i\r\r\f\w\k\n\k\0\2\2\g\r\5\z\v\j\g\x\s\n\w\p\r\6\8\1\6\b\o\8\n\x\f\o\x\7\u\8\u\9\a\a\o\n\z\e\2\0\w\o\j\c\i\1\x\k\g\3\0\j\o\0\l\r\a\d\q\b\8\b\o\h\1\v\w\v\5\h\e\e\j\1\8\e\b\8\n\0\y\9\3\b\l\3\i\n\q\n\c\9\w\p\r\h\e\6\2\m\3\9\4\m\e\d\5\x\h\p\b\z\j\o\y\y\s\7\m\q\0\u\2\1\e\2\y\a\u\p\u\6\9\r\5\8\l\l\h\5\5\o\0\u\b\7\y\z\g\g\l\7\f\u\s\u\3\v\5\3\w\6\3\r\4\9\q\3\k\m\d\d\r\0\p\e\8\9\g\2\7\t\x\w\o\2\l\h\a\h\v\m\1\9\1\x\e\o\p\o\b\7\d\4\p\x\d\0\h\8\o\l\c\n\q\1\n\c\k\6\f\3\v\t\p\c\9\q\y\d\r\v\n\1\k\w\t\9\t\f\a\h\0\s\e\6\g\5\e\d\v\f\a\r\u\a\k\q\9\s\e\h\a\8\o\z\t\s\i\w\f\2\w\e\y\a\2\j\a\4\c\g\m\a\7\x\z\n\g\w\y\5\i\z\o\l\d\5\x\v\m\a\9\8\m\x\8\b\z\u\t\2\x\o\9\7\k\r\w\r\6\5\1\r\p\d\0\s\v\k\e\7\t\x\i\o\f\j\n\c\0\e\t\6\c\1\y\p\k\j\r\4\8\z\i\4\j\b\6\x\u\n\p\d\l\p\3\d\n\8\o\q\j\m\4\f\6\o\3\8\5\2\9\6\w\r\b\2\m\e\1\p\6\r\y\w\h\9\w\t\7\g\o\y\8\8\u\o ]] 00:08:01.359 05:54:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:01.359 05:54:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:01.359 [2024-07-13 05:54:52.954850] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:08:01.359 [2024-07-13 05:54:52.954948] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75268 ] 00:08:01.616 [2024-07-13 05:54:53.094306] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.616 [2024-07-13 05:54:53.136737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.616 [2024-07-13 05:54:53.171172] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:01.616  Copying: 512/512 [B] (average 500 kBps) 00:08:01.616 00:08:01.875 05:54:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 7gs0rq07esm7dz0dnjzosri9m06uahtneylw6iuxnj60tw53q1ww150bbhkgo9h90o2vbl8mpch6mc6b6smmoed1lj7wa4dirrfwknk022gr5zvjgxsnwpr6816bo8nxfox7u8u9aaonze20wojci1xkg30jo0lradqb8boh1vwv5heej18eb8n0y93bl3inqnc9wprhe62m394med5xhpbzjoyys7mq0u21e2yaupu69r58llh55o0ub7yzggl7fusu3v53w63r49q3kmddr0pe89g27txwo2lhahvm191xeopob7d4pxd0h8olcnq1nck6f3vtpc9qydrvn1kwt9tfah0se6g5edvfaruakq9seha8oztsiwf2weya2ja4cgma7xzngwy5izold5xvma98mx8bzut2xo97krwr651rpd0svke7txiofjnc0et6c1ypkjr48zi4jb6xunpdlp3dn8oqjm4f6o385296wrb2me1p6rywh9wt7goy88uo == \7\g\s\0\r\q\0\7\e\s\m\7\d\z\0\d\n\j\z\o\s\r\i\9\m\0\6\u\a\h\t\n\e\y\l\w\6\i\u\x\n\j\6\0\t\w\5\3\q\1\w\w\1\5\0\b\b\h\k\g\o\9\h\9\0\o\2\v\b\l\8\m\p\c\h\6\m\c\6\b\6\s\m\m\o\e\d\1\l\j\7\w\a\4\d\i\r\r\f\w\k\n\k\0\2\2\g\r\5\z\v\j\g\x\s\n\w\p\r\6\8\1\6\b\o\8\n\x\f\o\x\7\u\8\u\9\a\a\o\n\z\e\2\0\w\o\j\c\i\1\x\k\g\3\0\j\o\0\l\r\a\d\q\b\8\b\o\h\1\v\w\v\5\h\e\e\j\1\8\e\b\8\n\0\y\9\3\b\l\3\i\n\q\n\c\9\w\p\r\h\e\6\2\m\3\9\4\m\e\d\5\x\h\p\b\z\j\o\y\y\s\7\m\q\0\u\2\1\e\2\y\a\u\p\u\6\9\r\5\8\l\l\h\5\5\o\0\u\b\7\y\z\g\g\l\7\f\u\s\u\3\v\5\3\w\6\3\r\4\9\q\3\k\m\d\d\r\0\p\e\8\9\g\2\7\t\x\w\o\2\l\h\a\h\v\m\1\9\1\x\e\o\p\o\b\7\d\4\p\x\d\0\h\8\o\l\c\n\q\1\n\c\k\6\f\3\v\t\p\c\9\q\y\d\r\v\n\1\k\w\t\9\t\f\a\h\0\s\e\6\g\5\e\d\v\f\a\r\u\a\k\q\9\s\e\h\a\8\o\z\t\s\i\w\f\2\w\e\y\a\2\j\a\4\c\g\m\a\7\x\z\n\g\w\y\5\i\z\o\l\d\5\x\v\m\a\9\8\m\x\8\b\z\u\t\2\x\o\9\7\k\r\w\r\6\5\1\r\p\d\0\s\v\k\e\7\t\x\i\o\f\j\n\c\0\e\t\6\c\1\y\p\k\j\r\4\8\z\i\4\j\b\6\x\u\n\p\d\l\p\3\d\n\8\o\q\j\m\4\f\6\o\3\8\5\2\9\6\w\r\b\2\m\e\1\p\6\r\y\w\h\9\w\t\7\g\o\y\8\8\u\o ]] 00:08:01.875 05:54:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:01.875 05:54:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:01.875 [2024-07-13 05:54:53.397092] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
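The last two output flags in the matrix, sync and dsync, request synchronized writes: dsync asks for data-integrity completion only (O_DSYNC), while sync also forces out the associated metadata (O_SYNC), so each write behaves roughly like a write followed by fdatasync or fsync respectively. A throwaway comparison with coreutils dd (the output path, block counts and the idea of timing the three variants are illustrative assumptions, not values taken from this run):

#!/usr/bin/env bash
# Sketch only: compare a buffered copy with its dsync and sync variants.
set -u
out=/tmp/dd-sync-demo.bin         # illustrative path, not used by this test

for oflag in '' dsync sync; do
  rm -f "$out"
  start=$(date +%s%N)
  dd if=/dev/zero of="$out" bs=4096 count=1024 ${oflag:+oflag=$oflag} 2>/dev/null
  end=$(date +%s%N)
  echo "oflag=${oflag:-none}: $(( (end - start) / 1000000 )) ms"
done

rm -f "$out"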
00:08:01.875 [2024-07-13 05:54:53.397206] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75276 ] 00:08:01.875 [2024-07-13 05:54:53.538141] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.875 [2024-07-13 05:54:53.578887] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.133 [2024-07-13 05:54:53.611029] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:02.133  Copying: 512/512 [B] (average 500 kBps) 00:08:02.133 00:08:02.133 05:54:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 7gs0rq07esm7dz0dnjzosri9m06uahtneylw6iuxnj60tw53q1ww150bbhkgo9h90o2vbl8mpch6mc6b6smmoed1lj7wa4dirrfwknk022gr5zvjgxsnwpr6816bo8nxfox7u8u9aaonze20wojci1xkg30jo0lradqb8boh1vwv5heej18eb8n0y93bl3inqnc9wprhe62m394med5xhpbzjoyys7mq0u21e2yaupu69r58llh55o0ub7yzggl7fusu3v53w63r49q3kmddr0pe89g27txwo2lhahvm191xeopob7d4pxd0h8olcnq1nck6f3vtpc9qydrvn1kwt9tfah0se6g5edvfaruakq9seha8oztsiwf2weya2ja4cgma7xzngwy5izold5xvma98mx8bzut2xo97krwr651rpd0svke7txiofjnc0et6c1ypkjr48zi4jb6xunpdlp3dn8oqjm4f6o385296wrb2me1p6rywh9wt7goy88uo == \7\g\s\0\r\q\0\7\e\s\m\7\d\z\0\d\n\j\z\o\s\r\i\9\m\0\6\u\a\h\t\n\e\y\l\w\6\i\u\x\n\j\6\0\t\w\5\3\q\1\w\w\1\5\0\b\b\h\k\g\o\9\h\9\0\o\2\v\b\l\8\m\p\c\h\6\m\c\6\b\6\s\m\m\o\e\d\1\l\j\7\w\a\4\d\i\r\r\f\w\k\n\k\0\2\2\g\r\5\z\v\j\g\x\s\n\w\p\r\6\8\1\6\b\o\8\n\x\f\o\x\7\u\8\u\9\a\a\o\n\z\e\2\0\w\o\j\c\i\1\x\k\g\3\0\j\o\0\l\r\a\d\q\b\8\b\o\h\1\v\w\v\5\h\e\e\j\1\8\e\b\8\n\0\y\9\3\b\l\3\i\n\q\n\c\9\w\p\r\h\e\6\2\m\3\9\4\m\e\d\5\x\h\p\b\z\j\o\y\y\s\7\m\q\0\u\2\1\e\2\y\a\u\p\u\6\9\r\5\8\l\l\h\5\5\o\0\u\b\7\y\z\g\g\l\7\f\u\s\u\3\v\5\3\w\6\3\r\4\9\q\3\k\m\d\d\r\0\p\e\8\9\g\2\7\t\x\w\o\2\l\h\a\h\v\m\1\9\1\x\e\o\p\o\b\7\d\4\p\x\d\0\h\8\o\l\c\n\q\1\n\c\k\6\f\3\v\t\p\c\9\q\y\d\r\v\n\1\k\w\t\9\t\f\a\h\0\s\e\6\g\5\e\d\v\f\a\r\u\a\k\q\9\s\e\h\a\8\o\z\t\s\i\w\f\2\w\e\y\a\2\j\a\4\c\g\m\a\7\x\z\n\g\w\y\5\i\z\o\l\d\5\x\v\m\a\9\8\m\x\8\b\z\u\t\2\x\o\9\7\k\r\w\r\6\5\1\r\p\d\0\s\v\k\e\7\t\x\i\o\f\j\n\c\0\e\t\6\c\1\y\p\k\j\r\4\8\z\i\4\j\b\6\x\u\n\p\d\l\p\3\d\n\8\o\q\j\m\4\f\6\o\3\8\5\2\9\6\w\r\b\2\m\e\1\p\6\r\y\w\h\9\w\t\7\g\o\y\8\8\u\o ]] 00:08:02.133 00:08:02.133 real 0m3.518s 00:08:02.133 user 0m1.741s 00:08:02.133 sys 0m0.803s 00:08:02.133 05:54:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:02.133 05:54:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:02.133 ************************************ 00:08:02.133 END TEST dd_flags_misc_forced_aio 00:08:02.133 ************************************ 00:08:02.133 05:54:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:08:02.133 05:54:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:08:02.133 05:54:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:02.133 05:54:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:02.133 00:08:02.133 real 0m16.314s 00:08:02.133 user 0m7.077s 00:08:02.133 sys 0m4.579s 00:08:02.133 05:54:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:02.133 05:54:53 
spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:02.133 ************************************ 00:08:02.133 END TEST spdk_dd_posix 00:08:02.133 ************************************ 00:08:02.392 05:54:53 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:08:02.392 05:54:53 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:08:02.392 05:54:53 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:02.392 05:54:53 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:02.392 05:54:53 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:02.392 ************************************ 00:08:02.392 START TEST spdk_dd_malloc 00:08:02.392 ************************************ 00:08:02.392 05:54:53 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:08:02.392 * Looking for test storage... 00:08:02.392 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:02.392 05:54:53 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:02.392 05:54:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:02.392 05:54:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:02.392 05:54:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:02.392 05:54:53 spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.392 05:54:53 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.392 05:54:53 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.392 05:54:53 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:08:02.392 05:54:53 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.392 05:54:53 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:08:02.392 05:54:53 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:02.392 05:54:53 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:02.392 05:54:53 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:08:02.392 ************************************ 00:08:02.392 START TEST dd_malloc_copy 00:08:02.392 ************************************ 00:08:02.392 05:54:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1123 -- # malloc_copy 00:08:02.392 05:54:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:08:02.392 05:54:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:08:02.392 05:54:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:08:02.392 05:54:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:08:02.392 05:54:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:08:02.392 05:54:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:08:02.392 05:54:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:08:02.392 05:54:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:08:02.392 05:54:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:02.392 05:54:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:02.392 [2024-07-13 05:54:54.022512] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:08:02.392 [2024-07-13 05:54:54.023280] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75350 ] 00:08:02.392 { 00:08:02.392 "subsystems": [ 00:08:02.392 { 00:08:02.392 "subsystem": "bdev", 00:08:02.392 "config": [ 00:08:02.392 { 00:08:02.392 "params": { 00:08:02.392 "block_size": 512, 00:08:02.392 "num_blocks": 1048576, 00:08:02.392 "name": "malloc0" 00:08:02.392 }, 00:08:02.392 "method": "bdev_malloc_create" 00:08:02.392 }, 00:08:02.392 { 00:08:02.392 "params": { 00:08:02.392 "block_size": 512, 00:08:02.392 "num_blocks": 1048576, 00:08:02.392 "name": "malloc1" 00:08:02.392 }, 00:08:02.392 "method": "bdev_malloc_create" 00:08:02.392 }, 00:08:02.392 { 00:08:02.392 "method": "bdev_wait_for_examine" 00:08:02.392 } 00:08:02.392 ] 00:08:02.392 } 00:08:02.392 ] 00:08:02.392 } 00:08:02.687 [2024-07-13 05:54:54.162469] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.687 [2024-07-13 05:54:54.207429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.687 [2024-07-13 05:54:54.241425] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:05.852  Copying: 192/512 [MB] (192 MBps) Copying: 389/512 [MB] (197 MBps) Copying: 512/512 [MB] (average 202 MBps) 00:08:05.852 00:08:05.852 05:54:57 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:08:05.852 05:54:57 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:08:05.852 05:54:57 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:05.852 05:54:57 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:05.852 [2024-07-13 05:54:57.373689] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
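Both dd_malloc_copy passes drive spdk_dd with a JSON config passed on fd 62, like the one echoed above: two RAM-backed malloc bdevs of 1048576 blocks x 512 bytes (512 MiB each) are created with bdev_malloc_create, one is copied into the other with --ib/--ob, and the result is the roughly 200 MB/s memory-to-memory rate reported here and on the reverse pass that follows. A standalone sketch of the same invocation, supplying the config through process substitution instead of a pre-opened descriptor (the spdk_dd path is the one used throughout this log; treating process substitution as equivalent to /dev/fd/62 is an assumption):

#!/usr/bin/env bash
# Sketch only: copy one malloc bdev into another with spdk_dd and an inline JSON config.
set -u
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

"$SPDK_DD" --ib=malloc0 --ob=malloc1 --json <(cat <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc0" },
          "method": "bdev_malloc_create" },
        { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc1" },
          "method": "bdev_malloc_create" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON
)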
00:08:05.852 [2024-07-13 05:54:57.373825] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75392 ] 00:08:05.852 { 00:08:05.852 "subsystems": [ 00:08:05.852 { 00:08:05.852 "subsystem": "bdev", 00:08:05.852 "config": [ 00:08:05.852 { 00:08:05.852 "params": { 00:08:05.852 "block_size": 512, 00:08:05.852 "num_blocks": 1048576, 00:08:05.852 "name": "malloc0" 00:08:05.852 }, 00:08:05.852 "method": "bdev_malloc_create" 00:08:05.852 }, 00:08:05.852 { 00:08:05.852 "params": { 00:08:05.852 "block_size": 512, 00:08:05.852 "num_blocks": 1048576, 00:08:05.852 "name": "malloc1" 00:08:05.852 }, 00:08:05.852 "method": "bdev_malloc_create" 00:08:05.852 }, 00:08:05.852 { 00:08:05.852 "method": "bdev_wait_for_examine" 00:08:05.852 } 00:08:05.852 ] 00:08:05.852 } 00:08:05.852 ] 00:08:05.852 } 00:08:05.852 [2024-07-13 05:54:57.512404] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.852 [2024-07-13 05:54:57.559848] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.111 [2024-07-13 05:54:57.594327] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:08.884  Copying: 226/512 [MB] (226 MBps) Copying: 437/512 [MB] (211 MBps) Copying: 512/512 [MB] (average 220 MBps) 00:08:08.884 00:08:08.884 00:08:08.884 real 0m6.458s 00:08:08.884 user 0m5.776s 00:08:08.884 sys 0m0.536s 00:08:08.884 05:55:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:08.884 ************************************ 00:08:08.884 END TEST dd_malloc_copy 00:08:08.884 ************************************ 00:08:08.884 05:55:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:08.884 05:55:00 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1142 -- # return 0 00:08:08.884 00:08:08.884 real 0m6.592s 00:08:08.884 user 0m5.834s 00:08:08.884 sys 0m0.609s 00:08:08.884 05:55:00 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:08.884 05:55:00 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:08:08.884 ************************************ 00:08:08.884 END TEST spdk_dd_malloc 00:08:08.884 ************************************ 00:08:08.884 05:55:00 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:08:08.884 05:55:00 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:08:08.884 05:55:00 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:08.884 05:55:00 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:08.884 05:55:00 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:08.884 ************************************ 00:08:08.884 START TEST spdk_dd_bdev_to_bdev 00:08:08.884 ************************************ 00:08:08.884 05:55:00 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:08:08.884 * Looking for test storage... 
00:08:08.884 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:08.884 05:55:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:08.884 05:55:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:08.884 05:55:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:08.884 05:55:00 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:08.884 05:55:00 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.884 05:55:00 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.884 05:55:00 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.884 05:55:00 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:08:08.884 05:55:00 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.884 05:55:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:08:08.884 05:55:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:08:08.884 05:55:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:08:08.884 05:55:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:08:08.884 05:55:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:08:08.884 
05:55:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:08:08.884 05:55:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:08:08.884 05:55:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:08:08.884 05:55:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:08:08.884 05:55:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:11.0 00:08:08.884 05:55:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:08:08.884 05:55:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:08:08.884 05:55:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:08:08.884 05:55:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:08:08.884 05:55:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:08.884 05:55:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:08.884 05:55:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:08:08.884 05:55:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:08:08.884 05:55:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:08.884 05:55:00 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:08.884 05:55:00 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:08.884 05:55:00 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:09.143 ************************************ 00:08:09.143 START TEST dd_inflate_file 00:08:09.143 ************************************ 00:08:09.143 05:55:00 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:09.143 [2024-07-13 05:55:00.655881] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:08:09.143 [2024-07-13 05:55:00.655969] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75491 ] 00:08:09.143 [2024-07-13 05:55:00.784838] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.143 [2024-07-13 05:55:00.817702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.143 [2024-07-13 05:55:00.845005] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:09.401  Copying: 64/64 [MB] (average 1684 MBps) 00:08:09.401 00:08:09.401 00:08:09.401 real 0m0.407s 00:08:09.401 user 0m0.217s 00:08:09.401 sys 0m0.205s 00:08:09.401 05:55:01 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:09.401 05:55:01 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:08:09.401 ************************************ 00:08:09.401 END TEST dd_inflate_file 00:08:09.401 ************************************ 00:08:09.401 05:55:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:08:09.401 05:55:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:08:09.401 05:55:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:08:09.401 05:55:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:09.401 05:55:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:08:09.401 05:55:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.401 05:55:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:08:09.401 05:55:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:09.401 05:55:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:09.401 05:55:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:09.401 ************************************ 00:08:09.401 START TEST dd_copy_to_out_bdev 00:08:09.401 ************************************ 00:08:09.401 05:55:01 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:09.401 [2024-07-13 05:55:01.118865] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:08:09.401 [2024-07-13 05:55:01.118947] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75524 ] 00:08:09.659 { 00:08:09.659 "subsystems": [ 00:08:09.659 { 00:08:09.659 "subsystem": "bdev", 00:08:09.659 "config": [ 00:08:09.659 { 00:08:09.659 "params": { 00:08:09.659 "trtype": "pcie", 00:08:09.659 "traddr": "0000:00:10.0", 00:08:09.659 "name": "Nvme0" 00:08:09.659 }, 00:08:09.659 "method": "bdev_nvme_attach_controller" 00:08:09.659 }, 00:08:09.659 { 00:08:09.659 "params": { 00:08:09.659 "trtype": "pcie", 00:08:09.659 "traddr": "0000:00:11.0", 00:08:09.659 "name": "Nvme1" 00:08:09.659 }, 00:08:09.659 "method": "bdev_nvme_attach_controller" 00:08:09.659 }, 00:08:09.659 { 00:08:09.659 "method": "bdev_wait_for_examine" 00:08:09.659 } 00:08:09.659 ] 00:08:09.659 } 00:08:09.659 ] 00:08:09.659 } 00:08:09.659 [2024-07-13 05:55:01.247604] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.659 [2024-07-13 05:55:01.281393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.659 [2024-07-13 05:55:01.310120] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:11.298  Copying: 51/64 [MB] (51 MBps) Copying: 64/64 [MB] (average 51 MBps) 00:08:11.298 00:08:11.298 00:08:11.298 real 0m1.811s 00:08:11.298 user 0m1.634s 00:08:11.298 sys 0m1.480s 00:08:11.298 05:55:02 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:11.298 ************************************ 00:08:11.298 05:55:02 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:11.298 END TEST dd_copy_to_out_bdev 00:08:11.298 ************************************ 00:08:11.298 05:55:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:08:11.298 05:55:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:08:11.298 05:55:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:08:11.298 05:55:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:11.298 05:55:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:11.298 05:55:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:11.298 ************************************ 00:08:11.298 START TEST dd_offset_magic 00:08:11.298 ************************************ 00:08:11.298 05:55:02 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1123 -- # offset_magic 00:08:11.298 05:55:02 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:08:11.298 05:55:02 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:08:11.298 05:55:02 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:08:11.298 05:55:02 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:11.298 05:55:02 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:08:11.298 05:55:02 
spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:08:11.298 05:55:02 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:11.298 05:55:02 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:11.298 [2024-07-13 05:55:02.982774] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:08:11.298 [2024-07-13 05:55:02.982872] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75564 ] 00:08:11.298 { 00:08:11.298 "subsystems": [ 00:08:11.298 { 00:08:11.298 "subsystem": "bdev", 00:08:11.298 "config": [ 00:08:11.298 { 00:08:11.298 "params": { 00:08:11.298 "trtype": "pcie", 00:08:11.298 "traddr": "0000:00:10.0", 00:08:11.298 "name": "Nvme0" 00:08:11.298 }, 00:08:11.298 "method": "bdev_nvme_attach_controller" 00:08:11.298 }, 00:08:11.298 { 00:08:11.298 "params": { 00:08:11.298 "trtype": "pcie", 00:08:11.298 "traddr": "0000:00:11.0", 00:08:11.298 "name": "Nvme1" 00:08:11.298 }, 00:08:11.298 "method": "bdev_nvme_attach_controller" 00:08:11.298 }, 00:08:11.298 { 00:08:11.298 "method": "bdev_wait_for_examine" 00:08:11.298 } 00:08:11.298 ] 00:08:11.298 } 00:08:11.298 ] 00:08:11.298 } 00:08:11.557 [2024-07-13 05:55:03.114452] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.557 [2024-07-13 05:55:03.152917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.557 [2024-07-13 05:55:03.182602] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:12.075  Copying: 65/65 [MB] (average 928 MBps) 00:08:12.075 00:08:12.075 05:55:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:08:12.075 05:55:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:12.075 05:55:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:12.075 05:55:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:12.075 [2024-07-13 05:55:03.637237] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:08:12.075 [2024-07-13 05:55:03.637330] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75584 ] 00:08:12.075 { 00:08:12.075 "subsystems": [ 00:08:12.075 { 00:08:12.075 "subsystem": "bdev", 00:08:12.075 "config": [ 00:08:12.075 { 00:08:12.075 "params": { 00:08:12.075 "trtype": "pcie", 00:08:12.075 "traddr": "0000:00:10.0", 00:08:12.075 "name": "Nvme0" 00:08:12.075 }, 00:08:12.075 "method": "bdev_nvme_attach_controller" 00:08:12.075 }, 00:08:12.075 { 00:08:12.075 "params": { 00:08:12.075 "trtype": "pcie", 00:08:12.075 "traddr": "0000:00:11.0", 00:08:12.075 "name": "Nvme1" 00:08:12.075 }, 00:08:12.075 "method": "bdev_nvme_attach_controller" 00:08:12.075 }, 00:08:12.075 { 00:08:12.075 "method": "bdev_wait_for_examine" 00:08:12.075 } 00:08:12.075 ] 00:08:12.075 } 00:08:12.075 ] 00:08:12.075 } 00:08:12.075 [2024-07-13 05:55:03.773904] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.334 [2024-07-13 05:55:03.807087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.334 [2024-07-13 05:55:03.834980] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:12.593  Copying: 1024/1024 [kB] (average 500 MBps) 00:08:12.593 00:08:12.593 05:55:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:12.593 05:55:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:12.593 05:55:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:12.593 05:55:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:08:12.593 05:55:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:08:12.593 05:55:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:12.593 05:55:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:12.593 [2024-07-13 05:55:04.182752] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:08:12.593 [2024-07-13 05:55:04.182865] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75595 ] 00:08:12.593 { 00:08:12.593 "subsystems": [ 00:08:12.593 { 00:08:12.593 "subsystem": "bdev", 00:08:12.593 "config": [ 00:08:12.593 { 00:08:12.593 "params": { 00:08:12.593 "trtype": "pcie", 00:08:12.593 "traddr": "0000:00:10.0", 00:08:12.593 "name": "Nvme0" 00:08:12.593 }, 00:08:12.593 "method": "bdev_nvme_attach_controller" 00:08:12.593 }, 00:08:12.593 { 00:08:12.593 "params": { 00:08:12.593 "trtype": "pcie", 00:08:12.593 "traddr": "0000:00:11.0", 00:08:12.593 "name": "Nvme1" 00:08:12.593 }, 00:08:12.593 "method": "bdev_nvme_attach_controller" 00:08:12.593 }, 00:08:12.593 { 00:08:12.593 "method": "bdev_wait_for_examine" 00:08:12.593 } 00:08:12.593 ] 00:08:12.593 } 00:08:12.593 ] 00:08:12.593 } 00:08:12.593 [2024-07-13 05:55:04.319108] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.852 [2024-07-13 05:55:04.352758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.852 [2024-07-13 05:55:04.382140] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:13.111  Copying: 65/65 [MB] (average 1226 MBps) 00:08:13.111 00:08:13.111 05:55:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:08:13.111 05:55:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:13.111 05:55:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:13.111 05:55:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:13.111 [2024-07-13 05:55:04.829648] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:08:13.111 [2024-07-13 05:55:04.829747] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75615 ] 00:08:13.370 { 00:08:13.370 "subsystems": [ 00:08:13.370 { 00:08:13.370 "subsystem": "bdev", 00:08:13.370 "config": [ 00:08:13.370 { 00:08:13.370 "params": { 00:08:13.370 "trtype": "pcie", 00:08:13.370 "traddr": "0000:00:10.0", 00:08:13.370 "name": "Nvme0" 00:08:13.370 }, 00:08:13.370 "method": "bdev_nvme_attach_controller" 00:08:13.370 }, 00:08:13.370 { 00:08:13.370 "params": { 00:08:13.370 "trtype": "pcie", 00:08:13.370 "traddr": "0000:00:11.0", 00:08:13.370 "name": "Nvme1" 00:08:13.370 }, 00:08:13.370 "method": "bdev_nvme_attach_controller" 00:08:13.370 }, 00:08:13.370 { 00:08:13.370 "method": "bdev_wait_for_examine" 00:08:13.370 } 00:08:13.370 ] 00:08:13.370 } 00:08:13.370 ] 00:08:13.370 } 00:08:13.370 [2024-07-13 05:55:04.966605] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.370 [2024-07-13 05:55:05.003276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.370 [2024-07-13 05:55:05.033574] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:13.628  Copying: 1024/1024 [kB] (average 500 MBps) 00:08:13.628 00:08:13.628 05:55:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:13.628 05:55:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:13.628 00:08:13.628 real 0m2.401s 00:08:13.628 user 0m1.779s 00:08:13.628 sys 0m0.621s 00:08:13.628 05:55:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:13.628 ************************************ 00:08:13.628 END TEST dd_offset_magic 00:08:13.628 ************************************ 00:08:13.628 05:55:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:13.887 05:55:05 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:08:13.887 05:55:05 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:08:13.887 05:55:05 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:08:13.887 05:55:05 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:13.887 05:55:05 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:08:13.887 05:55:05 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:08:13.887 05:55:05 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:08:13.887 05:55:05 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:08:13.887 05:55:05 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:08:13.887 05:55:05 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:08:13.887 05:55:05 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:13.887 05:55:05 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:13.887 [2024-07-13 05:55:05.428518] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:08:13.887 [2024-07-13 05:55:05.429402] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75652 ] 00:08:13.887 { 00:08:13.887 "subsystems": [ 00:08:13.887 { 00:08:13.887 "subsystem": "bdev", 00:08:13.887 "config": [ 00:08:13.887 { 00:08:13.887 "params": { 00:08:13.887 "trtype": "pcie", 00:08:13.887 "traddr": "0000:00:10.0", 00:08:13.887 "name": "Nvme0" 00:08:13.887 }, 00:08:13.887 "method": "bdev_nvme_attach_controller" 00:08:13.887 }, 00:08:13.887 { 00:08:13.887 "params": { 00:08:13.887 "trtype": "pcie", 00:08:13.887 "traddr": "0000:00:11.0", 00:08:13.887 "name": "Nvme1" 00:08:13.887 }, 00:08:13.887 "method": "bdev_nvme_attach_controller" 00:08:13.887 }, 00:08:13.887 { 00:08:13.887 "method": "bdev_wait_for_examine" 00:08:13.887 } 00:08:13.887 ] 00:08:13.887 } 00:08:13.887 ] 00:08:13.887 } 00:08:13.887 [2024-07-13 05:55:05.567682] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.887 [2024-07-13 05:55:05.603852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.145 [2024-07-13 05:55:05.632633] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:14.403  Copying: 5120/5120 [kB] (average 1250 MBps) 00:08:14.403 00:08:14.403 05:55:05 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:08:14.403 05:55:05 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:08:14.403 05:55:05 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:08:14.403 05:55:05 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:08:14.403 05:55:05 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:08:14.404 05:55:05 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:08:14.404 05:55:05 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:08:14.404 05:55:05 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:08:14.404 05:55:05 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:14.404 05:55:05 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:14.404 [2024-07-13 05:55:05.962536] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:08:14.404 [2024-07-13 05:55:05.962626] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75662 ] 00:08:14.404 { 00:08:14.404 "subsystems": [ 00:08:14.404 { 00:08:14.404 "subsystem": "bdev", 00:08:14.404 "config": [ 00:08:14.404 { 00:08:14.404 "params": { 00:08:14.404 "trtype": "pcie", 00:08:14.404 "traddr": "0000:00:10.0", 00:08:14.404 "name": "Nvme0" 00:08:14.404 }, 00:08:14.404 "method": "bdev_nvme_attach_controller" 00:08:14.404 }, 00:08:14.404 { 00:08:14.404 "params": { 00:08:14.404 "trtype": "pcie", 00:08:14.404 "traddr": "0000:00:11.0", 00:08:14.404 "name": "Nvme1" 00:08:14.404 }, 00:08:14.404 "method": "bdev_nvme_attach_controller" 00:08:14.404 }, 00:08:14.404 { 00:08:14.404 "method": "bdev_wait_for_examine" 00:08:14.404 } 00:08:14.404 ] 00:08:14.404 } 00:08:14.404 ] 00:08:14.404 } 00:08:14.404 [2024-07-13 05:55:06.097437] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.404 [2024-07-13 05:55:06.130111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.662 [2024-07-13 05:55:06.158697] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:14.921  Copying: 5120/5120 [kB] (average 1000 MBps) 00:08:14.921 00:08:14.921 05:55:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:08:14.921 00:08:14.921 real 0m5.957s 00:08:14.921 user 0m4.477s 00:08:14.921 sys 0m2.821s 00:08:14.921 05:55:06 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:14.921 ************************************ 00:08:14.921 END TEST spdk_dd_bdev_to_bdev 00:08:14.921 ************************************ 00:08:14.921 05:55:06 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:14.921 05:55:06 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:08:14.921 05:55:06 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:08:14.921 05:55:06 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:14.921 05:55:06 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:14.921 05:55:06 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:14.921 05:55:06 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:14.921 ************************************ 00:08:14.921 START TEST spdk_dd_uring 00:08:14.921 ************************************ 00:08:14.921 05:55:06 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:14.921 * Looking for test storage... 
00:08:14.921 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:14.921 05:55:06 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:14.921 05:55:06 spdk_dd.spdk_dd_uring -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:14.921 05:55:06 spdk_dd.spdk_dd_uring -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:14.921 05:55:06 spdk_dd.spdk_dd_uring -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:14.921 05:55:06 spdk_dd.spdk_dd_uring -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.921 05:55:06 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.921 05:55:06 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.921 05:55:06 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:08:14.921 05:55:06 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.921 05:55:06 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:08:14.921 05:55:06 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:14.921 05:55:06 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:14.921 05:55:06 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:08:14.921 ************************************ 00:08:14.921 START TEST dd_uring_copy 00:08:14.921 ************************************ 00:08:14.921 
05:55:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1123 -- # uring_zram_copy 00:08:14.921 05:55:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:08:14.921 05:55:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:08:14.921 05:55:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:08:14.921 05:55:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:14.921 05:55:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:08:14.921 05:55:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:08:14.921 05:55:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@163 -- # [[ -e /sys/class/zram-control ]] 00:08:14.921 05:55:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # return 00:08:14.921 05:55:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:08:14.921 05:55:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # cat /sys/class/zram-control/hot_add 00:08:14.921 05:55:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:08:14.921 05:55:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:08:14.921 05:55:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@181 -- # local id=1 00:08:14.921 05:55:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # local size=512M 00:08:14.922 05:55:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@184 -- # [[ -e /sys/block/zram1 ]] 00:08:14.922 05:55:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@186 -- # echo 512M 00:08:14.922 05:55:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:08:14.922 05:55:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:08:14.922 05:55:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:08:14.922 05:55:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:08:14.922 05:55:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:08:14.922 05:55:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:08:14.922 05:55:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:08:14.922 05:55:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:08:14.922 05:55:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:15.181 05:55:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # 
magic=bp3eg1rcem79zxm2fnosuvx7l48dt6qtnprkeosqrof7442gz4ej6ou7b54qfrljlurmahrf7297q12y3vzzrr0iufnf2odjiuw4v2pr7cm1dtkqa2s4vbezynjei79zs5agijggyw7wjqzcl3ot270pagdeo60rvqbe57o6iexqu61j598791h80bv2wuu705pajnpgd3ye9e1zct92rnimfeinxijf2ljhe7oiml2m2gg9jgwev2bn6q7545ql2wwehytb7f8hmwqsjeybg6p4ng5bx7jz0ug1m5plefbg3wpi5y0yo57hkrt96vlg0lv0ex8ebuaiflfwsqskn4btp7as04tfoggs5y76tob5e21j2tuyd8krv4cqe9hfc8zs6gg1yg38n3ylow4r4xg5urg4oks16rnztj0qwd870vb7z550wu5jq9y19d447ggshng6uh6u434dor51rri7ggnprqocv9crm51iccz5bjn4vuhkq63ceyfra4zquftjtcmaafvj7az3o6duqlm83cx3fruljr9ztsbqv8849xov0xa1eawqj7vxrh02ydmzrexatc2ghl3viwb4dy7nh736oy0fokzsw43te6npxgh5c3rbq1ekrhfu2wgwwaq5fm23wnr20akdf07mqswwnyjv9gcdfbz7sz4d05tb1fskhjq3b576k7ihdt059vgt5xzkq7rqgpuo3os7qt02lsf8iutiayae4f9liql3hg6xbym4wgfvi5cjwzvrnfc3lk02e0hzrxwbp9yx4u7zg178fdvp6aofz4s8m47sy8oavmgk813vusqrjw5ydne6nkaqxssgf8lhkr9v9kzrzvduiqxtn2hp0jsqw9z5zjn5f326wcqo4whczcs3yehdlpe0gmjv1dxaz22nhq05j9tt01ppjadi1kjle2l04jytpekxm5fy44flld1ss9f9953axhjj20ok003gu7iwsd58i3liquikpo82x4mriw8arasppnzbad01iewl 00:08:15.181 05:55:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo bp3eg1rcem79zxm2fnosuvx7l48dt6qtnprkeosqrof7442gz4ej6ou7b54qfrljlurmahrf7297q12y3vzzrr0iufnf2odjiuw4v2pr7cm1dtkqa2s4vbezynjei79zs5agijggyw7wjqzcl3ot270pagdeo60rvqbe57o6iexqu61j598791h80bv2wuu705pajnpgd3ye9e1zct92rnimfeinxijf2ljhe7oiml2m2gg9jgwev2bn6q7545ql2wwehytb7f8hmwqsjeybg6p4ng5bx7jz0ug1m5plefbg3wpi5y0yo57hkrt96vlg0lv0ex8ebuaiflfwsqskn4btp7as04tfoggs5y76tob5e21j2tuyd8krv4cqe9hfc8zs6gg1yg38n3ylow4r4xg5urg4oks16rnztj0qwd870vb7z550wu5jq9y19d447ggshng6uh6u434dor51rri7ggnprqocv9crm51iccz5bjn4vuhkq63ceyfra4zquftjtcmaafvj7az3o6duqlm83cx3fruljr9ztsbqv8849xov0xa1eawqj7vxrh02ydmzrexatc2ghl3viwb4dy7nh736oy0fokzsw43te6npxgh5c3rbq1ekrhfu2wgwwaq5fm23wnr20akdf07mqswwnyjv9gcdfbz7sz4d05tb1fskhjq3b576k7ihdt059vgt5xzkq7rqgpuo3os7qt02lsf8iutiayae4f9liql3hg6xbym4wgfvi5cjwzvrnfc3lk02e0hzrxwbp9yx4u7zg178fdvp6aofz4s8m47sy8oavmgk813vusqrjw5ydne6nkaqxssgf8lhkr9v9kzrzvduiqxtn2hp0jsqw9z5zjn5f326wcqo4whczcs3yehdlpe0gmjv1dxaz22nhq05j9tt01ppjadi1kjle2l04jytpekxm5fy44flld1ss9f9953axhjj20ok003gu7iwsd58i3liquikpo82x4mriw8arasppnzbad01iewl 00:08:15.181 05:55:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:08:15.181 [2024-07-13 05:55:06.703538] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:08:15.181 [2024-07-13 05:55:06.703649] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75732 ] 00:08:15.181 [2024-07-13 05:55:06.836569] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.181 [2024-07-13 05:55:06.875622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.181 [2024-07-13 05:55:06.907985] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:16.004  Copying: 511/511 [MB] (average 1264 MBps) 00:08:16.004 00:08:16.004 05:55:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:08:16.004 05:55:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:08:16.004 05:55:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:16.004 05:55:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:16.262 [2024-07-13 05:55:07.755702] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:08:16.263 [2024-07-13 05:55:07.755788] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75748 ] 00:08:16.263 { 00:08:16.263 "subsystems": [ 00:08:16.263 { 00:08:16.263 "subsystem": "bdev", 00:08:16.263 "config": [ 00:08:16.263 { 00:08:16.263 "params": { 00:08:16.263 "block_size": 512, 00:08:16.263 "num_blocks": 1048576, 00:08:16.263 "name": "malloc0" 00:08:16.263 }, 00:08:16.263 "method": "bdev_malloc_create" 00:08:16.263 }, 00:08:16.263 { 00:08:16.263 "params": { 00:08:16.263 "filename": "/dev/zram1", 00:08:16.263 "name": "uring0" 00:08:16.263 }, 00:08:16.263 "method": "bdev_uring_create" 00:08:16.263 }, 00:08:16.263 { 00:08:16.263 "method": "bdev_wait_for_examine" 00:08:16.263 } 00:08:16.263 ] 00:08:16.263 } 00:08:16.263 ] 00:08:16.263 } 00:08:16.263 [2024-07-13 05:55:07.893025] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.263 [2024-07-13 05:55:07.929217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.263 [2024-07-13 05:55:07.958010] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:19.151  Copying: 185/512 [MB] (185 MBps) Copying: 403/512 [MB] (217 MBps) Copying: 512/512 [MB] (average 204 MBps) 00:08:19.151 00:08:19.151 05:55:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:08:19.151 05:55:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:08:19.151 05:55:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:19.151 05:55:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:19.151 [2024-07-13 05:55:10.847066] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:08:19.151 [2024-07-13 05:55:10.847152] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75792 ] 00:08:19.151 { 00:08:19.151 "subsystems": [ 00:08:19.151 { 00:08:19.151 "subsystem": "bdev", 00:08:19.151 "config": [ 00:08:19.151 { 00:08:19.151 "params": { 00:08:19.151 "block_size": 512, 00:08:19.151 "num_blocks": 1048576, 00:08:19.151 "name": "malloc0" 00:08:19.151 }, 00:08:19.151 "method": "bdev_malloc_create" 00:08:19.151 }, 00:08:19.151 { 00:08:19.151 "params": { 00:08:19.151 "filename": "/dev/zram1", 00:08:19.151 "name": "uring0" 00:08:19.151 }, 00:08:19.151 "method": "bdev_uring_create" 00:08:19.151 }, 00:08:19.151 { 00:08:19.151 "method": "bdev_wait_for_examine" 00:08:19.151 } 00:08:19.151 ] 00:08:19.151 } 00:08:19.151 ] 00:08:19.151 } 00:08:19.410 [2024-07-13 05:55:10.983694] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.410 [2024-07-13 05:55:11.020501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.410 [2024-07-13 05:55:11.050497] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:22.923  Copying: 154/512 [MB] (154 MBps) Copying: 323/512 [MB] (168 MBps) Copying: 476/512 [MB] (153 MBps) Copying: 512/512 [MB] (average 160 MBps) 00:08:22.923 00:08:22.923 05:55:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:08:22.923 05:55:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ bp3eg1rcem79zxm2fnosuvx7l48dt6qtnprkeosqrof7442gz4ej6ou7b54qfrljlurmahrf7297q12y3vzzrr0iufnf2odjiuw4v2pr7cm1dtkqa2s4vbezynjei79zs5agijggyw7wjqzcl3ot270pagdeo60rvqbe57o6iexqu61j598791h80bv2wuu705pajnpgd3ye9e1zct92rnimfeinxijf2ljhe7oiml2m2gg9jgwev2bn6q7545ql2wwehytb7f8hmwqsjeybg6p4ng5bx7jz0ug1m5plefbg3wpi5y0yo57hkrt96vlg0lv0ex8ebuaiflfwsqskn4btp7as04tfoggs5y76tob5e21j2tuyd8krv4cqe9hfc8zs6gg1yg38n3ylow4r4xg5urg4oks16rnztj0qwd870vb7z550wu5jq9y19d447ggshng6uh6u434dor51rri7ggnprqocv9crm51iccz5bjn4vuhkq63ceyfra4zquftjtcmaafvj7az3o6duqlm83cx3fruljr9ztsbqv8849xov0xa1eawqj7vxrh02ydmzrexatc2ghl3viwb4dy7nh736oy0fokzsw43te6npxgh5c3rbq1ekrhfu2wgwwaq5fm23wnr20akdf07mqswwnyjv9gcdfbz7sz4d05tb1fskhjq3b576k7ihdt059vgt5xzkq7rqgpuo3os7qt02lsf8iutiayae4f9liql3hg6xbym4wgfvi5cjwzvrnfc3lk02e0hzrxwbp9yx4u7zg178fdvp6aofz4s8m47sy8oavmgk813vusqrjw5ydne6nkaqxssgf8lhkr9v9kzrzvduiqxtn2hp0jsqw9z5zjn5f326wcqo4whczcs3yehdlpe0gmjv1dxaz22nhq05j9tt01ppjadi1kjle2l04jytpekxm5fy44flld1ss9f9953axhjj20ok003gu7iwsd58i3liquikpo82x4mriw8arasppnzbad01iewl == 
\b\p\3\e\g\1\r\c\e\m\7\9\z\x\m\2\f\n\o\s\u\v\x\7\l\4\8\d\t\6\q\t\n\p\r\k\e\o\s\q\r\o\f\7\4\4\2\g\z\4\e\j\6\o\u\7\b\5\4\q\f\r\l\j\l\u\r\m\a\h\r\f\7\2\9\7\q\1\2\y\3\v\z\z\r\r\0\i\u\f\n\f\2\o\d\j\i\u\w\4\v\2\p\r\7\c\m\1\d\t\k\q\a\2\s\4\v\b\e\z\y\n\j\e\i\7\9\z\s\5\a\g\i\j\g\g\y\w\7\w\j\q\z\c\l\3\o\t\2\7\0\p\a\g\d\e\o\6\0\r\v\q\b\e\5\7\o\6\i\e\x\q\u\6\1\j\5\9\8\7\9\1\h\8\0\b\v\2\w\u\u\7\0\5\p\a\j\n\p\g\d\3\y\e\9\e\1\z\c\t\9\2\r\n\i\m\f\e\i\n\x\i\j\f\2\l\j\h\e\7\o\i\m\l\2\m\2\g\g\9\j\g\w\e\v\2\b\n\6\q\7\5\4\5\q\l\2\w\w\e\h\y\t\b\7\f\8\h\m\w\q\s\j\e\y\b\g\6\p\4\n\g\5\b\x\7\j\z\0\u\g\1\m\5\p\l\e\f\b\g\3\w\p\i\5\y\0\y\o\5\7\h\k\r\t\9\6\v\l\g\0\l\v\0\e\x\8\e\b\u\a\i\f\l\f\w\s\q\s\k\n\4\b\t\p\7\a\s\0\4\t\f\o\g\g\s\5\y\7\6\t\o\b\5\e\2\1\j\2\t\u\y\d\8\k\r\v\4\c\q\e\9\h\f\c\8\z\s\6\g\g\1\y\g\3\8\n\3\y\l\o\w\4\r\4\x\g\5\u\r\g\4\o\k\s\1\6\r\n\z\t\j\0\q\w\d\8\7\0\v\b\7\z\5\5\0\w\u\5\j\q\9\y\1\9\d\4\4\7\g\g\s\h\n\g\6\u\h\6\u\4\3\4\d\o\r\5\1\r\r\i\7\g\g\n\p\r\q\o\c\v\9\c\r\m\5\1\i\c\c\z\5\b\j\n\4\v\u\h\k\q\6\3\c\e\y\f\r\a\4\z\q\u\f\t\j\t\c\m\a\a\f\v\j\7\a\z\3\o\6\d\u\q\l\m\8\3\c\x\3\f\r\u\l\j\r\9\z\t\s\b\q\v\8\8\4\9\x\o\v\0\x\a\1\e\a\w\q\j\7\v\x\r\h\0\2\y\d\m\z\r\e\x\a\t\c\2\g\h\l\3\v\i\w\b\4\d\y\7\n\h\7\3\6\o\y\0\f\o\k\z\s\w\4\3\t\e\6\n\p\x\g\h\5\c\3\r\b\q\1\e\k\r\h\f\u\2\w\g\w\w\a\q\5\f\m\2\3\w\n\r\2\0\a\k\d\f\0\7\m\q\s\w\w\n\y\j\v\9\g\c\d\f\b\z\7\s\z\4\d\0\5\t\b\1\f\s\k\h\j\q\3\b\5\7\6\k\7\i\h\d\t\0\5\9\v\g\t\5\x\z\k\q\7\r\q\g\p\u\o\3\o\s\7\q\t\0\2\l\s\f\8\i\u\t\i\a\y\a\e\4\f\9\l\i\q\l\3\h\g\6\x\b\y\m\4\w\g\f\v\i\5\c\j\w\z\v\r\n\f\c\3\l\k\0\2\e\0\h\z\r\x\w\b\p\9\y\x\4\u\7\z\g\1\7\8\f\d\v\p\6\a\o\f\z\4\s\8\m\4\7\s\y\8\o\a\v\m\g\k\8\1\3\v\u\s\q\r\j\w\5\y\d\n\e\6\n\k\a\q\x\s\s\g\f\8\l\h\k\r\9\v\9\k\z\r\z\v\d\u\i\q\x\t\n\2\h\p\0\j\s\q\w\9\z\5\z\j\n\5\f\3\2\6\w\c\q\o\4\w\h\c\z\c\s\3\y\e\h\d\l\p\e\0\g\m\j\v\1\d\x\a\z\2\2\n\h\q\0\5\j\9\t\t\0\1\p\p\j\a\d\i\1\k\j\l\e\2\l\0\4\j\y\t\p\e\k\x\m\5\f\y\4\4\f\l\l\d\1\s\s\9\f\9\9\5\3\a\x\h\j\j\2\0\o\k\0\0\3\g\u\7\i\w\s\d\5\8\i\3\l\i\q\u\i\k\p\o\8\2\x\4\m\r\i\w\8\a\r\a\s\p\p\n\z\b\a\d\0\1\i\e\w\l ]] 00:08:22.923 05:55:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:08:22.923 05:55:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ bp3eg1rcem79zxm2fnosuvx7l48dt6qtnprkeosqrof7442gz4ej6ou7b54qfrljlurmahrf7297q12y3vzzrr0iufnf2odjiuw4v2pr7cm1dtkqa2s4vbezynjei79zs5agijggyw7wjqzcl3ot270pagdeo60rvqbe57o6iexqu61j598791h80bv2wuu705pajnpgd3ye9e1zct92rnimfeinxijf2ljhe7oiml2m2gg9jgwev2bn6q7545ql2wwehytb7f8hmwqsjeybg6p4ng5bx7jz0ug1m5plefbg3wpi5y0yo57hkrt96vlg0lv0ex8ebuaiflfwsqskn4btp7as04tfoggs5y76tob5e21j2tuyd8krv4cqe9hfc8zs6gg1yg38n3ylow4r4xg5urg4oks16rnztj0qwd870vb7z550wu5jq9y19d447ggshng6uh6u434dor51rri7ggnprqocv9crm51iccz5bjn4vuhkq63ceyfra4zquftjtcmaafvj7az3o6duqlm83cx3fruljr9ztsbqv8849xov0xa1eawqj7vxrh02ydmzrexatc2ghl3viwb4dy7nh736oy0fokzsw43te6npxgh5c3rbq1ekrhfu2wgwwaq5fm23wnr20akdf07mqswwnyjv9gcdfbz7sz4d05tb1fskhjq3b576k7ihdt059vgt5xzkq7rqgpuo3os7qt02lsf8iutiayae4f9liql3hg6xbym4wgfvi5cjwzvrnfc3lk02e0hzrxwbp9yx4u7zg178fdvp6aofz4s8m47sy8oavmgk813vusqrjw5ydne6nkaqxssgf8lhkr9v9kzrzvduiqxtn2hp0jsqw9z5zjn5f326wcqo4whczcs3yehdlpe0gmjv1dxaz22nhq05j9tt01ppjadi1kjle2l04jytpekxm5fy44flld1ss9f9953axhjj20ok003gu7iwsd58i3liquikpo82x4mriw8arasppnzbad01iewl == 
\b\p\3\e\g\1\r\c\e\m\7\9\z\x\m\2\f\n\o\s\u\v\x\7\l\4\8\d\t\6\q\t\n\p\r\k\e\o\s\q\r\o\f\7\4\4\2\g\z\4\e\j\6\o\u\7\b\5\4\q\f\r\l\j\l\u\r\m\a\h\r\f\7\2\9\7\q\1\2\y\3\v\z\z\r\r\0\i\u\f\n\f\2\o\d\j\i\u\w\4\v\2\p\r\7\c\m\1\d\t\k\q\a\2\s\4\v\b\e\z\y\n\j\e\i\7\9\z\s\5\a\g\i\j\g\g\y\w\7\w\j\q\z\c\l\3\o\t\2\7\0\p\a\g\d\e\o\6\0\r\v\q\b\e\5\7\o\6\i\e\x\q\u\6\1\j\5\9\8\7\9\1\h\8\0\b\v\2\w\u\u\7\0\5\p\a\j\n\p\g\d\3\y\e\9\e\1\z\c\t\9\2\r\n\i\m\f\e\i\n\x\i\j\f\2\l\j\h\e\7\o\i\m\l\2\m\2\g\g\9\j\g\w\e\v\2\b\n\6\q\7\5\4\5\q\l\2\w\w\e\h\y\t\b\7\f\8\h\m\w\q\s\j\e\y\b\g\6\p\4\n\g\5\b\x\7\j\z\0\u\g\1\m\5\p\l\e\f\b\g\3\w\p\i\5\y\0\y\o\5\7\h\k\r\t\9\6\v\l\g\0\l\v\0\e\x\8\e\b\u\a\i\f\l\f\w\s\q\s\k\n\4\b\t\p\7\a\s\0\4\t\f\o\g\g\s\5\y\7\6\t\o\b\5\e\2\1\j\2\t\u\y\d\8\k\r\v\4\c\q\e\9\h\f\c\8\z\s\6\g\g\1\y\g\3\8\n\3\y\l\o\w\4\r\4\x\g\5\u\r\g\4\o\k\s\1\6\r\n\z\t\j\0\q\w\d\8\7\0\v\b\7\z\5\5\0\w\u\5\j\q\9\y\1\9\d\4\4\7\g\g\s\h\n\g\6\u\h\6\u\4\3\4\d\o\r\5\1\r\r\i\7\g\g\n\p\r\q\o\c\v\9\c\r\m\5\1\i\c\c\z\5\b\j\n\4\v\u\h\k\q\6\3\c\e\y\f\r\a\4\z\q\u\f\t\j\t\c\m\a\a\f\v\j\7\a\z\3\o\6\d\u\q\l\m\8\3\c\x\3\f\r\u\l\j\r\9\z\t\s\b\q\v\8\8\4\9\x\o\v\0\x\a\1\e\a\w\q\j\7\v\x\r\h\0\2\y\d\m\z\r\e\x\a\t\c\2\g\h\l\3\v\i\w\b\4\d\y\7\n\h\7\3\6\o\y\0\f\o\k\z\s\w\4\3\t\e\6\n\p\x\g\h\5\c\3\r\b\q\1\e\k\r\h\f\u\2\w\g\w\w\a\q\5\f\m\2\3\w\n\r\2\0\a\k\d\f\0\7\m\q\s\w\w\n\y\j\v\9\g\c\d\f\b\z\7\s\z\4\d\0\5\t\b\1\f\s\k\h\j\q\3\b\5\7\6\k\7\i\h\d\t\0\5\9\v\g\t\5\x\z\k\q\7\r\q\g\p\u\o\3\o\s\7\q\t\0\2\l\s\f\8\i\u\t\i\a\y\a\e\4\f\9\l\i\q\l\3\h\g\6\x\b\y\m\4\w\g\f\v\i\5\c\j\w\z\v\r\n\f\c\3\l\k\0\2\e\0\h\z\r\x\w\b\p\9\y\x\4\u\7\z\g\1\7\8\f\d\v\p\6\a\o\f\z\4\s\8\m\4\7\s\y\8\o\a\v\m\g\k\8\1\3\v\u\s\q\r\j\w\5\y\d\n\e\6\n\k\a\q\x\s\s\g\f\8\l\h\k\r\9\v\9\k\z\r\z\v\d\u\i\q\x\t\n\2\h\p\0\j\s\q\w\9\z\5\z\j\n\5\f\3\2\6\w\c\q\o\4\w\h\c\z\c\s\3\y\e\h\d\l\p\e\0\g\m\j\v\1\d\x\a\z\2\2\n\h\q\0\5\j\9\t\t\0\1\p\p\j\a\d\i\1\k\j\l\e\2\l\0\4\j\y\t\p\e\k\x\m\5\f\y\4\4\f\l\l\d\1\s\s\9\f\9\9\5\3\a\x\h\j\j\2\0\o\k\0\0\3\g\u\7\i\w\s\d\5\8\i\3\l\i\q\u\i\k\p\o\8\2\x\4\m\r\i\w\8\a\r\a\s\p\p\n\z\b\a\d\0\1\i\e\w\l ]] 00:08:22.923 05:55:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:23.490 05:55:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:08:23.490 05:55:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:08:23.490 05:55:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:23.490 05:55:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:23.490 [2024-07-13 05:55:15.038592] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:08:23.490 [2024-07-13 05:55:15.038666] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75853 ] 00:08:23.490 { 00:08:23.490 "subsystems": [ 00:08:23.490 { 00:08:23.490 "subsystem": "bdev", 00:08:23.490 "config": [ 00:08:23.490 { 00:08:23.490 "params": { 00:08:23.490 "block_size": 512, 00:08:23.490 "num_blocks": 1048576, 00:08:23.490 "name": "malloc0" 00:08:23.490 }, 00:08:23.490 "method": "bdev_malloc_create" 00:08:23.490 }, 00:08:23.490 { 00:08:23.490 "params": { 00:08:23.490 "filename": "/dev/zram1", 00:08:23.490 "name": "uring0" 00:08:23.490 }, 00:08:23.490 "method": "bdev_uring_create" 00:08:23.490 }, 00:08:23.490 { 00:08:23.490 "method": "bdev_wait_for_examine" 00:08:23.490 } 00:08:23.490 ] 00:08:23.490 } 00:08:23.490 ] 00:08:23.490 } 00:08:23.490 [2024-07-13 05:55:15.170664] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.490 [2024-07-13 05:55:15.204758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.749 [2024-07-13 05:55:15.233165] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:26.995  Copying: 167/512 [MB] (167 MBps) Copying: 335/512 [MB] (168 MBps) Copying: 502/512 [MB] (166 MBps) Copying: 512/512 [MB] (average 167 MBps) 00:08:26.995 00:08:26.995 05:55:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:08:26.995 05:55:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:08:26.995 05:55:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:08:26.995 05:55:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:08:26.995 05:55:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:08:26.995 05:55:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:08:26.995 05:55:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:26.995 05:55:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:26.995 [2024-07-13 05:55:18.684033] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:08:26.995 [2024-07-13 05:55:18.684140] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75904 ] 00:08:26.995 { 00:08:26.995 "subsystems": [ 00:08:26.995 { 00:08:26.995 "subsystem": "bdev", 00:08:26.995 "config": [ 00:08:26.995 { 00:08:26.995 "params": { 00:08:26.995 "block_size": 512, 00:08:26.995 "num_blocks": 1048576, 00:08:26.995 "name": "malloc0" 00:08:26.995 }, 00:08:26.995 "method": "bdev_malloc_create" 00:08:26.995 }, 00:08:26.995 { 00:08:26.995 "params": { 00:08:26.995 "filename": "/dev/zram1", 00:08:26.995 "name": "uring0" 00:08:26.996 }, 00:08:26.996 "method": "bdev_uring_create" 00:08:26.996 }, 00:08:26.996 { 00:08:26.996 "params": { 00:08:26.996 "name": "uring0" 00:08:26.996 }, 00:08:26.996 "method": "bdev_uring_delete" 00:08:26.996 }, 00:08:26.996 { 00:08:26.996 "method": "bdev_wait_for_examine" 00:08:26.996 } 00:08:26.996 ] 00:08:26.996 } 00:08:26.996 ] 00:08:26.996 } 00:08:27.255 [2024-07-13 05:55:18.820213] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.255 [2024-07-13 05:55:18.860033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.255 [2024-07-13 05:55:18.893758] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:27.773  Copying: 0/0 [B] (average 0 Bps) 00:08:27.773 00:08:27.773 05:55:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:08:27.773 05:55:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:27.773 05:55:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:08:27.773 05:55:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@648 -- # local es=0 00:08:27.773 05:55:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:27.773 05:55:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:27.773 05:55:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:27.773 05:55:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.773 05:55:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:27.773 05:55:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.773 05:55:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:27.773 05:55:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.773 05:55:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:27.773 05:55:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.773 05:55:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:27.773 05:55:19 spdk_dd.spdk_dd_uring.dd_uring_copy 
-- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:27.773 [2024-07-13 05:55:19.351656] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:08:27.774 [2024-07-13 05:55:19.351773] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75929 ] 00:08:27.774 { 00:08:27.774 "subsystems": [ 00:08:27.774 { 00:08:27.774 "subsystem": "bdev", 00:08:27.774 "config": [ 00:08:27.774 { 00:08:27.774 "params": { 00:08:27.774 "block_size": 512, 00:08:27.774 "num_blocks": 1048576, 00:08:27.774 "name": "malloc0" 00:08:27.774 }, 00:08:27.774 "method": "bdev_malloc_create" 00:08:27.774 }, 00:08:27.774 { 00:08:27.774 "params": { 00:08:27.774 "filename": "/dev/zram1", 00:08:27.774 "name": "uring0" 00:08:27.774 }, 00:08:27.774 "method": "bdev_uring_create" 00:08:27.774 }, 00:08:27.774 { 00:08:27.774 "params": { 00:08:27.774 "name": "uring0" 00:08:27.774 }, 00:08:27.774 "method": "bdev_uring_delete" 00:08:27.774 }, 00:08:27.774 { 00:08:27.774 "method": "bdev_wait_for_examine" 00:08:27.774 } 00:08:27.774 ] 00:08:27.774 } 00:08:27.774 ] 00:08:27.774 } 00:08:28.033 [2024-07-13 05:55:19.508025] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.033 [2024-07-13 05:55:19.544840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.033 [2024-07-13 05:55:19.585254] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:28.033 [2024-07-13 05:55:19.712976] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:08:28.033 [2024-07-13 05:55:19.713027] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:08:28.033 [2024-07-13 05:55:19.713054] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:08:28.033 [2024-07-13 05:55:19.713064] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:28.292 [2024-07-13 05:55:19.894973] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:28.292 05:55:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@651 -- # es=237 00:08:28.292 05:55:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:28.292 05:55:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@660 -- # es=109 00:08:28.292 05:55:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # case "$es" in 00:08:28.292 05:55:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@668 -- # es=1 00:08:28.292 05:55:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:28.292 05:55:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:08:28.292 05:55:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # local id=1 00:08:28.292 05:55:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@174 -- # [[ -e /sys/block/zram1 ]] 00:08:28.292 05:55:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@176 -- # echo 1 00:08:28.292 05:55:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # echo 1 00:08:28.292 05:55:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:28.551 00:08:28.551 real 0m13.620s 00:08:28.551 user 0m9.257s 00:08:28.551 sys 0m12.026s 00:08:28.551 05:55:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:28.551 ************************************ 00:08:28.551 END TEST dd_uring_copy 00:08:28.551 ************************************ 00:08:28.551 05:55:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:28.810 05:55:20 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1142 -- # return 0 00:08:28.810 00:08:28.810 real 0m13.753s 00:08:28.810 user 0m9.305s 00:08:28.810 sys 0m12.113s 00:08:28.810 05:55:20 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:28.810 ************************************ 00:08:28.810 END TEST spdk_dd_uring 00:08:28.810 ************************************ 00:08:28.810 05:55:20 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:08:28.810 05:55:20 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:08:28.810 05:55:20 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:28.810 05:55:20 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:28.810 05:55:20 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:28.810 05:55:20 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:28.810 ************************************ 00:08:28.810 START TEST spdk_dd_sparse 00:08:28.810 ************************************ 00:08:28.810 05:55:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:28.810 * Looking for test storage... 00:08:28.810 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:28.810 05:55:20 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:28.810 05:55:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:28.810 05:55:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:28.810 05:55:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:28.810 05:55:20 spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.810 05:55:20 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.810 05:55:20 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.810 05:55:20 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:08:28.810 05:55:20 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.810 05:55:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:08:28.810 05:55:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:08:28.810 05:55:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:08:28.810 05:55:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:08:28.810 05:55:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:08:28.810 05:55:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:08:28.810 05:55:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:08:28.810 05:55:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:08:28.810 05:55:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:08:28.810 05:55:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:08:28.810 05:55:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:08:28.810 1+0 records in 00:08:28.810 1+0 records out 00:08:28.811 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00653532 s, 642 MB/s 00:08:28.811 05:55:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:08:28.811 1+0 records in 00:08:28.811 1+0 records out 00:08:28.811 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00637883 s, 658 MB/s 00:08:28.811 05:55:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:08:28.811 1+0 records in 00:08:28.811 1+0 records out 00:08:28.811 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00437991 s, 958 MB/s 00:08:28.811 05:55:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:08:28.811 05:55:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:28.811 05:55:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:28.811 05:55:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:28.811 ************************************ 00:08:28.811 START TEST dd_sparse_file_to_file 00:08:28.811 ************************************ 00:08:28.811 05:55:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1123 -- # 
file_to_file 00:08:28.811 05:55:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:08:28.811 05:55:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:08:28.811 05:55:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:28.811 05:55:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:08:28.811 05:55:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:08:28.811 05:55:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:08:28.811 05:55:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:08:28.811 05:55:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:08:28.811 05:55:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:08:28.811 05:55:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:28.811 [2024-07-13 05:55:20.513869] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:08:28.811 [2024-07-13 05:55:20.513965] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76016 ] 00:08:28.811 { 00:08:28.811 "subsystems": [ 00:08:28.811 { 00:08:28.811 "subsystem": "bdev", 00:08:28.811 "config": [ 00:08:28.811 { 00:08:28.811 "params": { 00:08:28.811 "block_size": 4096, 00:08:28.811 "filename": "dd_sparse_aio_disk", 00:08:28.811 "name": "dd_aio" 00:08:28.811 }, 00:08:28.811 "method": "bdev_aio_create" 00:08:28.811 }, 00:08:28.811 { 00:08:28.811 "params": { 00:08:28.811 "lvs_name": "dd_lvstore", 00:08:28.811 "bdev_name": "dd_aio" 00:08:28.811 }, 00:08:28.811 "method": "bdev_lvol_create_lvstore" 00:08:28.811 }, 00:08:28.811 { 00:08:28.811 "method": "bdev_wait_for_examine" 00:08:28.811 } 00:08:28.811 ] 00:08:28.811 } 00:08:28.811 ] 00:08:28.811 } 00:08:29.069 [2024-07-13 05:55:20.651404] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.069 [2024-07-13 05:55:20.686333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.069 [2024-07-13 05:55:20.716310] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:29.328  Copying: 12/36 [MB] (average 1090 MBps) 00:08:29.328 00:08:29.328 05:55:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:08:29.328 05:55:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:08:29.328 05:55:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:08:29.328 05:55:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:08:29.328 05:55:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:29.328 05:55:20 
spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:08:29.328 05:55:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:08:29.328 05:55:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:08:29.328 05:55:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:08:29.328 05:55:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:29.328 00:08:29.328 real 0m0.496s 00:08:29.328 user 0m0.297s 00:08:29.328 sys 0m0.229s 00:08:29.328 05:55:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:29.328 ************************************ 00:08:29.328 END TEST dd_sparse_file_to_file 00:08:29.328 ************************************ 00:08:29.328 05:55:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:29.328 05:55:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:08:29.328 05:55:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:08:29.329 05:55:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:29.329 05:55:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:29.329 05:55:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:29.329 ************************************ 00:08:29.329 START TEST dd_sparse_file_to_bdev 00:08:29.329 ************************************ 00:08:29.329 05:55:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1123 -- # file_to_bdev 00:08:29.329 05:55:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:29.329 05:55:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:08:29.329 05:55:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:08:29.329 05:55:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:08:29.329 05:55:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:08:29.329 05:55:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:08:29.329 05:55:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:29.329 05:55:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:29.594 [2024-07-13 05:55:21.065168] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:08:29.594 [2024-07-13 05:55:21.065281] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76058 ] 00:08:29.594 { 00:08:29.594 "subsystems": [ 00:08:29.594 { 00:08:29.594 "subsystem": "bdev", 00:08:29.594 "config": [ 00:08:29.594 { 00:08:29.594 "params": { 00:08:29.594 "block_size": 4096, 00:08:29.594 "filename": "dd_sparse_aio_disk", 00:08:29.594 "name": "dd_aio" 00:08:29.594 }, 00:08:29.594 "method": "bdev_aio_create" 00:08:29.594 }, 00:08:29.594 { 00:08:29.594 "params": { 00:08:29.594 "lvs_name": "dd_lvstore", 00:08:29.594 "lvol_name": "dd_lvol", 00:08:29.594 "size_in_mib": 36, 00:08:29.594 "thin_provision": true 00:08:29.595 }, 00:08:29.595 "method": "bdev_lvol_create" 00:08:29.595 }, 00:08:29.595 { 00:08:29.595 "method": "bdev_wait_for_examine" 00:08:29.595 } 00:08:29.595 ] 00:08:29.595 } 00:08:29.595 ] 00:08:29.595 } 00:08:29.595 [2024-07-13 05:55:21.211789] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.595 [2024-07-13 05:55:21.246371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.595 [2024-07-13 05:55:21.278482] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:29.865  Copying: 12/36 [MB] (average 521 MBps) 00:08:29.865 00:08:29.865 00:08:29.865 real 0m0.514s 00:08:29.865 user 0m0.335s 00:08:29.865 sys 0m0.232s 00:08:29.865 05:55:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:29.865 05:55:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:29.865 ************************************ 00:08:29.865 END TEST dd_sparse_file_to_bdev 00:08:29.865 ************************************ 00:08:29.865 05:55:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:08:29.865 05:55:21 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:08:29.865 05:55:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:29.865 05:55:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:29.865 05:55:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:29.865 ************************************ 00:08:29.865 START TEST dd_sparse_bdev_to_file 00:08:29.865 ************************************ 00:08:29.865 05:55:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1123 -- # bdev_to_file 00:08:29.865 05:55:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:08:29.865 05:55:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:08:29.865 05:55:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:29.865 05:55:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:08:29.865 05:55:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:08:29.865 05:55:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 
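As an illustrative aside, the sparse tests traced above reduce to a small recipe: back a sparse file with an AIO bdev, lay an lvstore and a thin lvol on top, and let spdk_dd copy with --sparse so holes are skipped. Below is a minimal stand-alone sketch of that recipe; the method names and parameters are taken from the gen_conf output captured in this log, while conf.json and the consolidated single-config layout are assumptions for illustration, not the harness's own helpers.
# Sparse backing file, as in the prepare step earlier in this log
truncate dd_sparse_aio_disk --size 104857600
# One consolidated bdev config: AIO bdev -> lvstore -> thin lvol (the tests split this across runs)
cat > conf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_aio_create",
          "params": { "filename": "dd_sparse_aio_disk", "name": "dd_aio", "block_size": 4096 } },
        { "method": "bdev_lvol_create_lvstore",
          "params": { "bdev_name": "dd_aio", "lvs_name": "dd_lvstore" } },
        { "method": "bdev_lvol_create",
          "params": { "lvs_name": "dd_lvstore", "lvol_name": "dd_lvol",
                      "size_in_mib": 36, "thin_provision": true } },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
# File -> lvol copy with hole skipping, mirroring dd_sparse_file_to_bdev above
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json conf.json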
00:08:29.865 05:55:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:08:29.865 05:55:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:30.124 [2024-07-13 05:55:21.615816] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:08:30.124 [2024-07-13 05:55:21.615895] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76096 ] 00:08:30.124 { 00:08:30.124 "subsystems": [ 00:08:30.124 { 00:08:30.124 "subsystem": "bdev", 00:08:30.124 "config": [ 00:08:30.124 { 00:08:30.124 "params": { 00:08:30.124 "block_size": 4096, 00:08:30.124 "filename": "dd_sparse_aio_disk", 00:08:30.124 "name": "dd_aio" 00:08:30.124 }, 00:08:30.124 "method": "bdev_aio_create" 00:08:30.124 }, 00:08:30.124 { 00:08:30.124 "method": "bdev_wait_for_examine" 00:08:30.124 } 00:08:30.124 ] 00:08:30.124 } 00:08:30.124 ] 00:08:30.124 } 00:08:30.124 [2024-07-13 05:55:21.744806] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.124 [2024-07-13 05:55:21.780642] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.124 [2024-07-13 05:55:21.810878] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:30.384  Copying: 12/36 [MB] (average 1090 MBps) 00:08:30.384 00:08:30.384 05:55:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:08:30.384 05:55:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:08:30.384 05:55:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:08:30.384 05:55:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:08:30.384 05:55:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:30.384 05:55:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:08:30.384 05:55:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:08:30.384 05:55:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:08:30.384 05:55:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:08:30.384 05:55:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:30.384 00:08:30.384 real 0m0.473s 00:08:30.384 user 0m0.284s 00:08:30.384 sys 0m0.237s 00:08:30.384 05:55:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:30.384 05:55:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:30.384 ************************************ 00:08:30.384 END TEST dd_sparse_bdev_to_file 00:08:30.384 ************************************ 00:08:30.384 05:55:22 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:08:30.384 05:55:22 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:08:30.384 05:55:22 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:08:30.384 05:55:22 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:08:30.384 05:55:22 spdk_dd.spdk_dd_sparse 
-- dd/sparse.sh@13 -- # rm file_zero2 00:08:30.384 05:55:22 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:08:30.384 00:08:30.384 real 0m1.778s 00:08:30.384 user 0m1.008s 00:08:30.384 sys 0m0.891s 00:08:30.644 05:55:22 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:30.644 05:55:22 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:30.644 ************************************ 00:08:30.644 END TEST spdk_dd_sparse 00:08:30.644 ************************************ 00:08:30.644 05:55:22 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:08:30.644 05:55:22 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:30.644 05:55:22 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:30.644 05:55:22 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:30.644 05:55:22 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:30.644 ************************************ 00:08:30.644 START TEST spdk_dd_negative 00:08:30.644 ************************************ 00:08:30.644 05:55:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:30.644 * Looking for test storage... 00:08:30.644 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:30.644 05:55:22 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:30.644 05:55:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:30.644 05:55:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:30.644 05:55:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:30.644 05:55:22 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.644 05:55:22 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.644 05:55:22 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.644 05:55:22 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:08:30.644 05:55:22 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.644 05:55:22 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:30.644 05:55:22 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:30.644 05:55:22 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:30.644 05:55:22 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:30.644 05:55:22 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:08:30.644 05:55:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:30.644 05:55:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:30.644 05:55:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:30.644 ************************************ 00:08:30.644 START TEST dd_invalid_arguments 00:08:30.644 ************************************ 00:08:30.644 05:55:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1123 -- # invalid_arguments 00:08:30.644 05:55:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:30.644 05:55:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@648 -- # local es=0 00:08:30.644 05:55:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:30.644 05:55:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:30.644 05:55:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:30.644 05:55:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:30.644 05:55:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:30.644 05:55:22 
spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:30.644 05:55:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:30.644 05:55:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:30.644 05:55:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:30.644 05:55:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:30.644 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:08:30.644 00:08:30.644 CPU options: 00:08:30.644 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:08:30.644 (like [0,1,10]) 00:08:30.644 --lcores lcore to CPU mapping list. The list is in the format: 00:08:30.644 [<,lcores[@CPUs]>...] 00:08:30.644 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:08:30.644 Within the group, '-' is used for range separator, 00:08:30.644 ',' is used for single number separator. 00:08:30.644 '( )' can be omitted for single element group, 00:08:30.644 '@' can be omitted if cpus and lcores have the same value 00:08:30.644 --disable-cpumask-locks Disable CPU core lock files. 00:08:30.644 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:08:30.644 pollers in the app support interrupt mode) 00:08:30.644 -p, --main-core main (primary) core for DPDK 00:08:30.644 00:08:30.644 Configuration options: 00:08:30.644 -c, --config, --json JSON config file 00:08:30.644 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:08:30.644 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:08:30.644 --wait-for-rpc wait for RPCs to initialize subsystems 00:08:30.644 --rpcs-allowed comma-separated list of permitted RPCS 00:08:30.644 --json-ignore-init-errors don't exit on invalid config entry 00:08:30.644 00:08:30.644 Memory options: 00:08:30.644 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:08:30.644 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:08:30.644 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:08:30.644 -R, --huge-unlink unlink huge files after initialization 00:08:30.644 -n, --mem-channels number of memory channels used for DPDK 00:08:30.644 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:08:30.644 --msg-mempool-size global message memory pool size in count (default: 262143) 00:08:30.644 --no-huge run without using hugepages 00:08:30.644 -i, --shm-id shared memory ID (optional) 00:08:30.644 -g, --single-file-segments force creating just one hugetlbfs file 00:08:30.644 00:08:30.644 PCI options: 00:08:30.644 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:08:30.644 -B, --pci-blocked pci addr to block (can be used more than once) 00:08:30.644 -u, --no-pci disable PCI access 00:08:30.644 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:08:30.644 00:08:30.644 Log options: 00:08:30.644 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:08:30.644 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:08:30.645 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:08:30.645 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:08:30.645 blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, 00:08:30.645 json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, 00:08:30.645 nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, sock_posix, 00:08:30.645 thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, 00:08:30.645 vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, 00:08:30.645 virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, 00:08:30.645 virtio_vfio_user, vmd) 00:08:30.645 --silence-noticelog disable notice level logging to stderr 00:08:30.645 00:08:30.645 Trace options: 00:08:30.645 --num-trace-entries number of trace entries for each core, must be power of 2, 00:08:30.645 setting 0 to disable trace (default 32768) 00:08:30.645 Tracepoints vary in size and can use more than one trace entry. 00:08:30.645 -e, --tpoint-group [:] 00:08:30.645 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:08:30.645 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:08:30.645 [2024-07-13 05:55:22.312180] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:08:30.645 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, all). 00:08:30.645 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:08:30.645 a tracepoint group. First tpoint inside a group can be enabled by 00:08:30.645 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:08:30.645 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:08:30.645 in /include/spdk_internal/trace_defs.h 00:08:30.645 00:08:30.645 Other options: 00:08:30.645 -h, --help show this usage 00:08:30.645 -v, --version print SPDK version 00:08:30.645 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:08:30.645 --env-context Opaque context for use of the env implementation 00:08:30.645 00:08:30.645 Application specific: 00:08:30.645 [--------- DD Options ---------] 00:08:30.645 --if Input file. Must specify either --if or --ib. 00:08:30.645 --ib Input bdev. Must specifier either --if or --ib 00:08:30.645 --of Output file. Must specify either --of or --ob. 00:08:30.645 --ob Output bdev. Must specify either --of or --ob. 00:08:30.645 --iflag Input file flags. 00:08:30.645 --oflag Output file flags. 00:08:30.645 --bs I/O unit size (default: 4096) 00:08:30.645 --qd Queue depth (default: 2) 00:08:30.645 --count I/O unit count. The number of I/O units to copy. (default: all) 00:08:30.645 --skip Skip this many I/O units at start of input. (default: 0) 00:08:30.645 --seek Skip this many I/O units at start of output. (default: 0) 00:08:30.645 --aio Force usage of AIO. (by default io_uring is used if available) 00:08:30.645 --sparse Enable hole skipping in input target 00:08:30.645 Available iflag and oflag values: 00:08:30.645 append - append mode 00:08:30.645 direct - use direct I/O for data 00:08:30.645 directory - fail unless a directory 00:08:30.645 dsync - use synchronized I/O for data 00:08:30.645 noatime - do not update access time 00:08:30.645 noctty - do not assign controlling terminal from file 00:08:30.645 nofollow - do not follow symlinks 00:08:30.645 nonblock - use non-blocking I/O 00:08:30.645 sync - use synchronized I/O for data and metadata 00:08:30.645 05:55:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # es=2 00:08:30.645 05:55:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:30.645 05:55:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:30.645 05:55:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:30.645 00:08:30.645 real 0m0.068s 00:08:30.645 user 0m0.040s 00:08:30.645 sys 0m0.027s 00:08:30.645 05:55:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:30.645 ************************************ 00:08:30.645 END TEST dd_invalid_arguments 00:08:30.645 ************************************ 00:08:30.645 05:55:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:08:30.645 05:55:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:30.645 05:55:22 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:08:30.645 05:55:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:30.645 05:55:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:30.645 05:55:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:30.905 ************************************ 00:08:30.905 START TEST dd_double_input 00:08:30.905 ************************************ 00:08:30.905 05:55:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1123 -- # double_input 00:08:30.905 05:55:22 spdk_dd.spdk_dd_negative.dd_double_input -- 
dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:30.905 05:55:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@648 -- # local es=0 00:08:30.905 05:55:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:30.905 05:55:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:30.905 05:55:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:30.905 05:55:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:30.905 05:55:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:30.905 05:55:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:30.905 05:55:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:30.905 05:55:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:30.905 05:55:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:30.905 05:55:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:30.905 [2024-07-13 05:55:22.428782] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
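The NOT/valid_exec_arg plumbing traced here boils down to asserting that spdk_dd exits non-zero and emits the expected diagnostic when given conflicting sources. A rough stand-alone equivalent is sketched below; err.log is an illustrative scratch file rather than anything the harness creates, and it assumes the diagnostic lands on stderr, as SPDK error logging normally does.
# Conflicting --if and --ib must be rejected before any copy is attempted
if /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 2> err.log; then
  echo 'expected spdk_dd to fail when both --if and --ib are given' >&2
  exit 1
fi
# The message checked is the one captured in this log
grep -q 'You may specify either --if or --ib, but not both.' err.log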
00:08:30.905 05:55:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # es=22 00:08:30.905 05:55:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:30.905 05:55:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:30.905 05:55:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:30.905 00:08:30.905 real 0m0.070s 00:08:30.905 user 0m0.042s 00:08:30.905 sys 0m0.026s 00:08:30.905 05:55:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:30.905 05:55:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:08:30.905 ************************************ 00:08:30.905 END TEST dd_double_input 00:08:30.905 ************************************ 00:08:30.905 05:55:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:30.905 05:55:22 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:08:30.905 05:55:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:30.905 05:55:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:30.905 05:55:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:30.905 ************************************ 00:08:30.905 START TEST dd_double_output 00:08:30.905 ************************************ 00:08:30.905 05:55:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1123 -- # double_output 00:08:30.905 05:55:22 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:30.905 05:55:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@648 -- # local es=0 00:08:30.905 05:55:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:30.905 05:55:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:30.905 05:55:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:30.905 05:55:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:30.905 05:55:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:30.905 05:55:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:30.905 05:55:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:30.905 05:55:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:30.905 05:55:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:30.905 05:55:22 spdk_dd.spdk_dd_negative.dd_double_output -- 
common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:30.905 [2024-07-13 05:55:22.550182] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:08:30.905 05:55:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@651 -- # es=22 00:08:30.905 05:55:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:30.905 05:55:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:30.905 05:55:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:30.905 00:08:30.905 real 0m0.073s 00:08:30.905 user 0m0.040s 00:08:30.905 sys 0m0.031s 00:08:30.905 05:55:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:30.905 05:55:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:08:30.905 ************************************ 00:08:30.905 END TEST dd_double_output 00:08:30.905 ************************************ 00:08:30.905 05:55:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:30.905 05:55:22 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:08:30.906 05:55:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:30.906 05:55:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:30.906 05:55:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:30.906 ************************************ 00:08:30.906 START TEST dd_no_input 00:08:30.906 ************************************ 00:08:30.906 05:55:22 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1123 -- # no_input 00:08:30.906 05:55:22 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:30.906 05:55:22 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@648 -- # local es=0 00:08:30.906 05:55:22 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:30.906 05:55:22 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:30.906 05:55:22 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:30.906 05:55:22 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:30.906 05:55:22 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:30.906 05:55:22 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:30.906 05:55:22 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:30.906 05:55:22 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:30.906 05:55:22 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:30.906 05:55:22 
spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:31.165 [2024-07-13 05:55:22.678884] spdk_dd.c:1499:main: *ERROR*: You must specify either --if or --ib 00:08:31.165 05:55:22 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # es=22 00:08:31.165 05:55:22 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:31.165 05:55:22 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:31.165 05:55:22 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:31.165 00:08:31.165 real 0m0.076s 00:08:31.165 user 0m0.045s 00:08:31.165 sys 0m0.030s 00:08:31.165 05:55:22 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:31.165 05:55:22 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:08:31.165 ************************************ 00:08:31.165 END TEST dd_no_input 00:08:31.165 ************************************ 00:08:31.165 05:55:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:31.165 05:55:22 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:08:31.165 05:55:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:31.165 05:55:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:31.165 05:55:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:31.165 ************************************ 00:08:31.165 START TEST dd_no_output 00:08:31.165 ************************************ 00:08:31.165 05:55:22 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1123 -- # no_output 00:08:31.165 05:55:22 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:31.165 05:55:22 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@648 -- # local es=0 00:08:31.165 05:55:22 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:31.165 05:55:22 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.165 05:55:22 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:31.165 05:55:22 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.165 05:55:22 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:31.165 05:55:22 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.165 05:55:22 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:31.165 05:55:22 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.165 05:55:22 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:31.165 05:55:22 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:31.165 [2024-07-13 05:55:22.801733] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:08:31.165 05:55:22 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # es=22 00:08:31.165 05:55:22 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:31.165 05:55:22 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:31.165 05:55:22 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:31.165 00:08:31.165 real 0m0.075s 00:08:31.165 user 0m0.047s 00:08:31.165 sys 0m0.026s 00:08:31.165 05:55:22 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:31.165 05:55:22 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:08:31.165 ************************************ 00:08:31.165 END TEST dd_no_output 00:08:31.165 ************************************ 00:08:31.165 05:55:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:31.165 05:55:22 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:08:31.166 05:55:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:31.166 05:55:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:31.166 05:55:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:31.166 ************************************ 00:08:31.166 START TEST dd_wrong_blocksize 00:08:31.166 ************************************ 00:08:31.166 05:55:22 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1123 -- # wrong_blocksize 00:08:31.166 05:55:22 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:31.166 05:55:22 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:08:31.166 05:55:22 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:31.166 05:55:22 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.166 05:55:22 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:31.166 05:55:22 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.166 05:55:22 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:31.166 05:55:22 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.166 05:55:22 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:31.166 05:55:22 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- 
common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.166 05:55:22 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:31.166 05:55:22 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:31.424 [2024-07-13 05:55:22.923700] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:08:31.424 05:55:22 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # es=22 00:08:31.424 05:55:22 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:31.424 05:55:22 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:31.424 05:55:22 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:31.424 00:08:31.424 real 0m0.068s 00:08:31.424 user 0m0.049s 00:08:31.424 sys 0m0.018s 00:08:31.424 05:55:22 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:31.424 ************************************ 00:08:31.424 END TEST dd_wrong_blocksize 00:08:31.424 ************************************ 00:08:31.424 05:55:22 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:08:31.424 05:55:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:31.424 05:55:22 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:08:31.424 05:55:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:31.424 05:55:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:31.424 05:55:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:31.424 ************************************ 00:08:31.424 START TEST dd_smaller_blocksize 00:08:31.424 ************************************ 00:08:31.424 05:55:22 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1123 -- # smaller_blocksize 00:08:31.424 05:55:22 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:31.424 05:55:22 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:08:31.424 05:55:22 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:31.424 05:55:22 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.424 05:55:22 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:31.424 05:55:22 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.424 05:55:22 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case 
"$(type -t "$arg")" in 00:08:31.424 05:55:22 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.424 05:55:22 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:31.424 05:55:22 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.424 05:55:22 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:31.424 05:55:22 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:31.424 [2024-07-13 05:55:23.040479] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:08:31.424 [2024-07-13 05:55:23.040562] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76309 ] 00:08:31.684 [2024-07-13 05:55:23.174563] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.684 [2024-07-13 05:55:23.216795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.684 [2024-07-13 05:55:23.249625] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:31.684 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:08:31.684 [2024-07-13 05:55:23.266204] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:08:31.684 [2024-07-13 05:55:23.266236] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:31.684 [2024-07-13 05:55:23.332203] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:31.684 05:55:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # es=244 00:08:31.684 05:55:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:31.684 05:55:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@660 -- # es=116 00:08:31.684 05:55:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # case "$es" in 00:08:31.684 05:55:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@668 -- # es=1 00:08:31.684 05:55:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:31.684 00:08:31.684 real 0m0.416s 00:08:31.684 user 0m0.216s 00:08:31.684 sys 0m0.096s 00:08:31.684 05:55:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:31.684 05:55:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:08:31.684 ************************************ 00:08:31.684 END TEST dd_smaller_blocksize 00:08:31.684 ************************************ 00:08:31.943 05:55:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:31.943 05:55:23 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:08:31.943 05:55:23 spdk_dd.spdk_dd_negative -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:31.943 05:55:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:31.943 05:55:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:31.943 ************************************ 00:08:31.943 START TEST dd_invalid_count 00:08:31.943 ************************************ 00:08:31.943 05:55:23 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1123 -- # invalid_count 00:08:31.943 05:55:23 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:31.943 05:55:23 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@648 -- # local es=0 00:08:31.943 05:55:23 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:31.943 05:55:23 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.943 05:55:23 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:31.943 05:55:23 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.943 05:55:23 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:31.943 05:55:23 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.943 05:55:23 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:31.943 05:55:23 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.943 05:55:23 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:31.943 05:55:23 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:31.943 [2024-07-13 05:55:23.513716] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:08:31.943 05:55:23 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # es=22 00:08:31.943 05:55:23 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:31.943 05:55:23 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:31.943 05:55:23 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:31.943 00:08:31.943 real 0m0.074s 00:08:31.943 user 0m0.049s 00:08:31.943 sys 0m0.024s 00:08:31.943 05:55:23 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:31.943 ************************************ 00:08:31.943 END TEST dd_invalid_count 00:08:31.943 ************************************ 00:08:31.943 05:55:23 spdk_dd.spdk_dd_negative.dd_invalid_count -- 
common/autotest_common.sh@10 -- # set +x 00:08:31.943 05:55:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:31.943 05:55:23 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:08:31.943 05:55:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:31.943 05:55:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:31.943 05:55:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:31.943 ************************************ 00:08:31.943 START TEST dd_invalid_oflag 00:08:31.943 ************************************ 00:08:31.943 05:55:23 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1123 -- # invalid_oflag 00:08:31.943 05:55:23 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:31.943 05:55:23 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@648 -- # local es=0 00:08:31.943 05:55:23 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:31.943 05:55:23 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.943 05:55:23 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:31.943 05:55:23 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.943 05:55:23 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:31.943 05:55:23 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.943 05:55:23 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:31.943 05:55:23 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.943 05:55:23 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:31.943 05:55:23 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:31.943 [2024-07-13 05:55:23.627285] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:08:31.943 05:55:23 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # es=22 00:08:31.943 05:55:23 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:31.943 05:55:23 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:31.943 05:55:23 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:31.943 00:08:31.943 real 0m0.060s 00:08:31.943 user 0m0.037s 00:08:31.943 sys 0m0.023s 00:08:31.943 05:55:23 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:31.943 ************************************ 00:08:31.943 END TEST dd_invalid_oflag 00:08:31.943 ************************************ 00:08:31.943 05:55:23 
spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:08:32.203 05:55:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:32.203 05:55:23 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:08:32.203 05:55:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:32.203 05:55:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:32.203 05:55:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:32.203 ************************************ 00:08:32.203 START TEST dd_invalid_iflag 00:08:32.203 ************************************ 00:08:32.203 05:55:23 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1123 -- # invalid_iflag 00:08:32.203 05:55:23 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:32.203 05:55:23 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@648 -- # local es=0 00:08:32.203 05:55:23 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:32.203 05:55:23 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.203 05:55:23 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:32.203 05:55:23 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.203 05:55:23 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:32.203 05:55:23 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.203 05:55:23 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:32.203 05:55:23 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.203 05:55:23 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:32.203 05:55:23 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:32.203 [2024-07-13 05:55:23.757252] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:08:32.203 05:55:23 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # es=22 00:08:32.203 05:55:23 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:32.203 05:55:23 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:32.203 05:55:23 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:32.203 00:08:32.203 real 0m0.073s 00:08:32.203 user 0m0.047s 00:08:32.203 sys 0m0.025s 00:08:32.203 05:55:23 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:32.203 ************************************ 00:08:32.203 END TEST dd_invalid_iflag 00:08:32.203 
************************************ 00:08:32.203 05:55:23 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:08:32.203 05:55:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:32.203 05:55:23 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:08:32.203 05:55:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:32.203 05:55:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:32.203 05:55:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:32.203 ************************************ 00:08:32.203 START TEST dd_unknown_flag 00:08:32.203 ************************************ 00:08:32.203 05:55:23 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1123 -- # unknown_flag 00:08:32.203 05:55:23 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:32.203 05:55:23 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@648 -- # local es=0 00:08:32.203 05:55:23 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:32.203 05:55:23 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.203 05:55:23 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:32.203 05:55:23 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.203 05:55:23 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:32.203 05:55:23 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.203 05:55:23 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:32.203 05:55:23 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.203 05:55:23 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:32.203 05:55:23 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:32.203 [2024-07-13 05:55:23.892511] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:08:32.203 [2024-07-13 05:55:23.892617] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76401 ] 00:08:32.462 [2024-07-13 05:55:24.033273] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.462 [2024-07-13 05:55:24.076391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.462 [2024-07-13 05:55:24.109566] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:32.462 [2024-07-13 05:55:24.125970] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:08:32.463 [2024-07-13 05:55:24.126046] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:32.463 [2024-07-13 05:55:24.126110] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:08:32.463 [2024-07-13 05:55:24.126126] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:32.463 [2024-07-13 05:55:24.126419] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:08:32.463 [2024-07-13 05:55:24.126441] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:32.463 [2024-07-13 05:55:24.126496] app.c:1039:app_stop: *NOTICE*: spdk_app_stop called twice 00:08:32.463 [2024-07-13 05:55:24.126509] app.c:1039:app_stop: *NOTICE*: spdk_app_stop called twice 00:08:32.722 [2024-07-13 05:55:24.191511] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:32.722 05:55:24 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # es=234 00:08:32.722 05:55:24 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:32.722 05:55:24 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@660 -- # es=106 00:08:32.722 05:55:24 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # case "$es" in 00:08:32.722 05:55:24 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@668 -- # es=1 00:08:32.722 05:55:24 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:32.722 00:08:32.722 real 0m0.444s 00:08:32.722 user 0m0.235s 00:08:32.722 sys 0m0.111s 00:08:32.722 05:55:24 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:32.722 ************************************ 00:08:32.722 END TEST dd_unknown_flag 00:08:32.722 05:55:24 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:08:32.722 ************************************ 00:08:32.722 05:55:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:32.722 05:55:24 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:08:32.722 05:55:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:32.722 05:55:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:32.722 05:55:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:32.722 ************************************ 00:08:32.722 START TEST dd_invalid_json 00:08:32.722 ************************************ 00:08:32.722 05:55:24 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1123 -- # invalid_json 00:08:32.722 05:55:24 
spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:32.722 05:55:24 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@648 -- # local es=0 00:08:32.722 05:55:24 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:32.722 05:55:24 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # : 00:08:32.722 05:55:24 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.722 05:55:24 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:32.722 05:55:24 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.722 05:55:24 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:32.722 05:55:24 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.722 05:55:24 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:32.722 05:55:24 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.722 05:55:24 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:32.722 05:55:24 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:32.722 [2024-07-13 05:55:24.381203] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:08:32.722 [2024-07-13 05:55:24.381292] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76425 ] 00:08:32.981 [2024-07-13 05:55:24.521677] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.981 [2024-07-13 05:55:24.563115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.981 [2024-07-13 05:55:24.563191] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:08:32.981 [2024-07-13 05:55:24.563210] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:32.981 [2024-07-13 05:55:24.563221] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:32.981 [2024-07-13 05:55:24.563263] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:32.981 05:55:24 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # es=234 00:08:32.981 05:55:24 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:32.981 05:55:24 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@660 -- # es=106 00:08:32.981 05:55:24 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # case "$es" in 00:08:32.981 05:55:24 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@668 -- # es=1 00:08:32.981 05:55:24 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:32.981 00:08:32.981 real 0m0.315s 00:08:32.981 user 0m0.154s 00:08:32.981 sys 0m0.057s 00:08:32.981 05:55:24 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:32.981 05:55:24 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:08:32.981 ************************************ 00:08:32.981 END TEST dd_invalid_json 00:08:32.981 ************************************ 00:08:32.981 05:55:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:32.981 00:08:32.981 real 0m2.518s 00:08:32.981 user 0m1.255s 00:08:32.981 sys 0m0.919s 00:08:32.981 05:55:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:32.981 05:55:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:32.981 ************************************ 00:08:32.981 END TEST spdk_dd_negative 00:08:32.981 ************************************ 00:08:33.240 05:55:24 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:08:33.240 00:08:33.240 real 1m2.225s 00:08:33.240 user 0m39.725s 00:08:33.240 sys 0m27.176s 00:08:33.240 05:55:24 spdk_dd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:33.240 05:55:24 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:33.240 ************************************ 00:08:33.240 END TEST spdk_dd 00:08:33.240 ************************************ 00:08:33.240 05:55:24 -- common/autotest_common.sh@1142 -- # return 0 00:08:33.240 05:55:24 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:08:33.240 05:55:24 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:08:33.240 05:55:24 -- spdk/autotest.sh@260 -- # timing_exit lib 00:08:33.240 05:55:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:33.240 05:55:24 -- common/autotest_common.sh@10 -- # set +x 00:08:33.240 05:55:24 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 
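[Editor's note] Each of the spdk_dd negative tests that just completed (wrong blocksize, smaller blocksize, invalid count, invalid oflag/iflag, unknown flag, invalid JSON) follows the same shape: invoke spdk_dd through the NOT/valid_exec_arg helpers with one deliberately bad option and treat a zero exit status as a failure. A minimal stand-alone sketch of that pattern, using the binary path printed in the log but without the helper machinery (the dump files are assumed to be scratch files created earlier by the test):

    # Minimal sketch of the negative-path check used above (NOT()/valid_exec_arg omitted).
    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    IF=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
    OF=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1

    if "$SPDK_DD" --if="$IF" --of="$OF" --bs=0; then
        # A zero exit status here means the invalid option was accepted: that is the failure case.
        echo "FAIL: spdk_dd accepted an invalid --bs value" >&2
        exit 1
    fi
    echo "OK: invalid --bs rejected as expected"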
00:08:33.240 05:55:24 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:08:33.240 05:55:24 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:08:33.240 05:55:24 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:08:33.240 05:55:24 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:08:33.240 05:55:24 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:08:33.240 05:55:24 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:33.240 05:55:24 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:33.240 05:55:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:33.240 05:55:24 -- common/autotest_common.sh@10 -- # set +x 00:08:33.240 ************************************ 00:08:33.240 START TEST nvmf_tcp 00:08:33.240 ************************************ 00:08:33.240 05:55:24 nvmf_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:33.240 * Looking for test storage... 00:08:33.240 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:33.240 05:55:24 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:08:33.240 05:55:24 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:08:33.240 05:55:24 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:33.240 05:55:24 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:08:33.240 05:55:24 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:33.240 05:55:24 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:33.240 05:55:24 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:33.240 05:55:24 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:33.240 05:55:24 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:33.240 05:55:24 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:33.240 05:55:24 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:33.240 05:55:24 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:33.240 05:55:24 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:33.240 05:55:24 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:33.240 05:55:24 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:08:33.240 05:55:24 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=d95af516-4532-4483-a837-b3cd72acabce 00:08:33.240 05:55:24 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:33.240 05:55:24 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:33.240 05:55:24 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:33.240 05:55:24 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:33.240 05:55:24 nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:33.240 05:55:24 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:33.240 05:55:24 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:33.240 05:55:24 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:33.240 05:55:24 nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.240 05:55:24 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.240 05:55:24 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.240 05:55:24 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:08:33.240 05:55:24 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.240 05:55:24 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:08:33.240 05:55:24 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:33.240 05:55:24 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:33.240 05:55:24 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:33.240 05:55:24 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:33.240 05:55:24 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:33.240 05:55:24 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:33.240 05:55:24 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:33.240 05:55:24 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:33.240 05:55:24 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:33.240 05:55:24 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:08:33.240 05:55:24 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:08:33.240 05:55:24 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:33.240 05:55:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:33.240 05:55:24 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 1 -eq 0 ]] 00:08:33.240 05:55:24 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:33.240 05:55:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:33.240 05:55:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:33.240 05:55:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:33.240 ************************************ 00:08:33.240 START TEST nvmf_host_management 00:08:33.240 ************************************ 00:08:33.240 
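[Editor's note] The nvmf/common.sh sourcing above establishes the host identity that later initiator-side steps reuse. A short sketch of how those variables fit together, using the uuid generated in this run; deriving the host ID from the NQN suffix is an assumption (common.sh may compute it differently), and the trailing connect line only illustrates how NVME_HOST is consumed by kernel-initiator tests — the host_management test below drives the target with bdevperf instead:

    # Host identity for the nvmf tests (values from this run; the HOSTID derivation is an assumption).
    NVME_HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce
    NVME_HOSTID=${NVME_HOSTNQN##*:}      # e.g. d95af516-4532-4483-a837-b3cd72acabce
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    # Illustrative only: how a connect-based test would consume these against the veth target.
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn "${NVME_HOST[@]}"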
05:55:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:33.500 * Looking for test storage... 00:08:33.500 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:33.500 05:55:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:33.500 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:33.500 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:33.500 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=d95af516-4532-4483-a837-b3cd72acabce 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 
00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:33.501 Cannot find device "nvmf_init_br" 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # true 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:33.501 Cannot find device "nvmf_tgt_br" 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:33.501 Cannot find device "nvmf_tgt_br2" 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:33.501 Cannot find device "nvmf_init_br" 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # true 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:33.501 Cannot find device "nvmf_tgt_br" 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:08:33.501 05:55:25 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:33.501 Cannot find device "nvmf_tgt_br2" 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:33.501 Cannot find device "nvmf_br" 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # true 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:33.501 Cannot find device "nvmf_init_if" 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # true 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:33.501 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:33.501 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:33.501 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:33.760 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:33.760 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:33.760 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:33.760 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:33.760 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:33.760 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:33.760 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:33.760 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:33.760 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 
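[Editor's note] Condensed, the topology nvmf_veth_init builds here (including the bridge-mastering, iptables and ping steps logged just below) is the following; names and addresses are taken from the log, while the second target leg (nvmf_tgt_if2 / 10.0.0.3) and the stale-device cleanup are omitted:

    # Veth/namespace topology for the NVMe/TCP tests (condensed sketch of nvmf_veth_init).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side stays in the root ns
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target side moves into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2        # root ns -> target ns, should answer once the bridge is up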
00:08:33.760 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:33.760 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:33.760 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:33.760 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:33.760 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:33.760 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:33.760 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:33.760 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 00:08:33.760 00:08:33.760 --- 10.0.0.2 ping statistics --- 00:08:33.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.761 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:08:33.761 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:33.761 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:33.761 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:08:33.761 00:08:33.761 --- 10.0.0.3 ping statistics --- 00:08:33.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.761 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:08:33.761 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:33.761 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:33.761 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:08:33.761 00:08:33.761 --- 10.0.0.1 ping statistics --- 00:08:33.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.761 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:08:33.761 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:33.761 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:08:33.761 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:33.761 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:33.761 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:33.761 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:33.761 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:33.761 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:33.761 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:33.761 05:55:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:33.761 05:55:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:33.761 05:55:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:33.761 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:33.761 05:55:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:33.761 05:55:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:33.761 05:55:25 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@481 -- # nvmfpid=76685 00:08:33.761 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:33.761 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 76685 00:08:33.761 05:55:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 76685 ']' 00:08:33.761 05:55:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.761 05:55:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:33.761 05:55:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:33.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:33.761 05:55:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:33.761 05:55:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:34.020 [2024-07-13 05:55:25.533321] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:08:34.020 [2024-07-13 05:55:25.533433] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:34.020 [2024-07-13 05:55:25.674058] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:34.020 [2024-07-13 05:55:25.712734] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:34.020 [2024-07-13 05:55:25.712805] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:34.020 [2024-07-13 05:55:25.712831] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:34.020 [2024-07-13 05:55:25.712839] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:34.020 [2024-07-13 05:55:25.712845] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
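[Editor's note] nvmfappstart boils down to launching nvmf_tgt inside the target namespace and blocking until its RPC socket answers. A simplified stand-in for that sequence, assembled from the command logged above; the polling loop is an approximation of what waitforlisten does (the real helper tracks the PID and socket more carefully), and framework_wait_init is used here only as a convenient blocking RPC:

    # Simplified version of nvmfappstart as seen in this run.
    SPDK=/home/vagrant/spdk_repo/spdk
    ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    # Block until the app has finished subsystem init and serves RPCs on /var/tmp/spdk.sock.
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock framework_wait_init 2>/dev/null; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt died during startup" >&2; exit 1; }
        sleep 0.5
    done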
00:08:34.020 [2024-07-13 05:55:25.716407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:34.020 [2024-07-13 05:55:25.716608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:34.020 [2024-07-13 05:55:25.716870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:34.020 [2024-07-13 05:55:25.716877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:34.020 [2024-07-13 05:55:25.746517] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:34.279 05:55:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:34.279 05:55:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:08:34.279 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:34.279 05:55:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:34.279 05:55:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:34.279 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:34.279 05:55:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:34.279 05:55:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.279 05:55:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:34.279 [2024-07-13 05:55:25.838627] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:34.279 05:55:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.279 05:55:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:34.279 05:55:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:34.279 05:55:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:34.279 05:55:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:34.279 05:55:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:34.279 05:55:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:34.279 05:55:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.279 05:55:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:34.279 Malloc0 00:08:34.279 [2024-07-13 05:55:25.914685] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:34.279 05:55:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.279 05:55:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:34.279 05:55:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:34.279 05:55:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:34.279 05:55:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=76732 00:08:34.279 05:55:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 76732 /var/tmp/bdevperf.sock 00:08:34.279 05:55:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 76732 ']' 
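[Editor's note] The rpcs.txt batch that host_management.sh assembles and pipes into rpc_cmd is not echoed in the log. Given the nvmf_create_transport call logged verbatim, the MALLOC_BDEV_SIZE=64 / MALLOC_BLOCK_SIZE=512 values set earlier, the Malloc0 bdev, and the listener that appears on 10.0.0.2:4420 for nqn.2016-06.io.spdk:cnode0, it plausibly amounts to something like the following — a reconstruction, not a transcript:

    # Hedged reconstruction of the target-side RPC batch (the actual rpcs.txt is not shown).
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }   # shorthand for this sketch

    rpc nvmf_create_transport -t tcp -o -u 8192                       # logged verbatim above
    rpc bdev_malloc_create 64 512 -b Malloc0                          # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDKISFASTANDAWESOME
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420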
00:08:34.279 05:55:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:34.279 05:55:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:34.279 05:55:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:34.279 05:55:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:34.279 05:55:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:34.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:34.279 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:08:34.279 05:55:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:34.279 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:08:34.279 05:55:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:34.279 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:34.279 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:34.279 { 00:08:34.279 "params": { 00:08:34.279 "name": "Nvme$subsystem", 00:08:34.279 "trtype": "$TEST_TRANSPORT", 00:08:34.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:34.279 "adrfam": "ipv4", 00:08:34.279 "trsvcid": "$NVMF_PORT", 00:08:34.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:34.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:34.279 "hdgst": ${hdgst:-false}, 00:08:34.279 "ddgst": ${ddgst:-false} 00:08:34.279 }, 00:08:34.279 "method": "bdev_nvme_attach_controller" 00:08:34.279 } 00:08:34.279 EOF 00:08:34.279 )") 00:08:34.279 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:08:34.279 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:08:34.279 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:08:34.279 05:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:34.279 "params": { 00:08:34.279 "name": "Nvme0", 00:08:34.279 "trtype": "tcp", 00:08:34.279 "traddr": "10.0.0.2", 00:08:34.280 "adrfam": "ipv4", 00:08:34.280 "trsvcid": "4420", 00:08:34.280 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:34.280 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:34.280 "hdgst": false, 00:08:34.280 "ddgst": false 00:08:34.280 }, 00:08:34.280 "method": "bdev_nvme_attach_controller" 00:08:34.280 }' 00:08:34.538 [2024-07-13 05:55:26.024266] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
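The gen_nvmf_target_json helper above emits the bdev_nvme_attach_controller entry printed a few lines up and hands it to bdevperf over a process-substitution fd (/dev/fd/63). Written out as a file, the equivalent invocation looks roughly like this sketch; the outer "subsystems" wrapper is assumed from the standard SPDK app JSON config layout, while the inner params are copied from the log:

cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# Same options as the run above: 64 outstanding I/Os of 64 KiB, verify workload, 10 seconds.
./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/bdevperf.json \
  -q 64 -o 65536 -w verify -t 10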
00:08:34.538 [2024-07-13 05:55:26.024404] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76732 ] 00:08:34.538 [2024-07-13 05:55:26.166553] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.538 [2024-07-13 05:55:26.208818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.538 [2024-07-13 05:55:26.251074] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:34.797 Running I/O for 10 seconds... 00:08:34.797 05:55:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:34.797 05:55:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:08:34.797 05:55:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:34.797 05:55:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.797 05:55:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:34.797 05:55:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.797 05:55:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:34.797 05:55:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:34.797 05:55:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:34.798 05:55:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:34.798 05:55:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:34.798 05:55:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:34.798 05:55:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:34.798 05:55:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:34.798 05:55:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:34.798 05:55:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:34.798 05:55:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.798 05:55:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:34.798 05:55:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.798 05:55:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:08:34.798 05:55:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:08:34.798 05:55:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:08:35.057 05:55:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:08:35.057 05:55:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:35.057 05:55:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:35.057 05:55:26 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:35.057 05:55:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.057 05:55:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:35.057 05:55:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.318 05:55:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:08:35.318 05:55:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:08:35.318 05:55:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:35.318 05:55:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:35.318 05:55:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:35.318 05:55:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:35.318 05:55:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.318 05:55:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:35.318 05:55:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.318 05:55:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:35.318 05:55:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.318 05:55:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:35.318 05:55:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.318 05:55:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:35.318 [2024-07-13 05:55:26.816559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.318 [2024-07-13 05:55:26.816609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.318 [2024-07-13 05:55:26.816633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.318 [2024-07-13 05:55:26.816644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.318 [2024-07-13 05:55:26.816661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.318 [2024-07-13 05:55:26.816674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.318 [2024-07-13 05:55:26.816685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.318 [2024-07-13 05:55:26.816695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.318 [2024-07-13 05:55:26.816706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.318 [2024-07-13 05:55:26.816715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.318 [2024-07-13 05:55:26.816727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.318 [2024-07-13 05:55:26.816736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.318 [2024-07-13 05:55:26.816747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.318 [2024-07-13 05:55:26.816756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.318 [2024-07-13 05:55:26.816767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.318 [2024-07-13 05:55:26.816776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.318 [2024-07-13 05:55:26.816787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.318 [2024-07-13 05:55:26.816796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.318 [2024-07-13 05:55:26.816807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.318 [2024-07-13 05:55:26.816816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.318 [2024-07-13 05:55:26.816827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.318 [2024-07-13 05:55:26.816840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.318 [2024-07-13 05:55:26.816857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.318 [2024-07-13 05:55:26.816867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.318 [2024-07-13 05:55:26.816878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.318 [2024-07-13 05:55:26.816887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.318 [2024-07-13 05:55:26.816898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.318 [2024-07-13 05:55:26.816907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.318 [2024-07-13 05:55:26.816918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:08:35.318 [2024-07-13 05:55:26.816927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.318 [2024-07-13 05:55:26.816939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.318 [2024-07-13 05:55:26.816948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.318 [2024-07-13 05:55:26.816980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.318 [2024-07-13 05:55:26.816991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.318 [2024-07-13 05:55:26.817002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.318 [2024-07-13 05:55:26.817011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.318 [2024-07-13 05:55:26.817022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.318 [2024-07-13 05:55:26.817032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.318 [2024-07-13 05:55:26.817044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.318 [2024-07-13 05:55:26.817053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.318 [2024-07-13 05:55:26.817064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.318 [2024-07-13 05:55:26.817073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.318 [2024-07-13 05:55:26.817084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.318 [2024-07-13 05:55:26.817094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.318 [2024-07-13 05:55:26.817105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.318 [2024-07-13 05:55:26.817113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.318 [2024-07-13 05:55:26.817125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.318 [2024-07-13 05:55:26.817134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.318 [2024-07-13 05:55:26.817145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:08:35.318 [2024-07-13 05:55:26.817154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.318 [2024-07-13 05:55:26.817166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.318 [2024-07-13 05:55:26.817183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.318 [2024-07-13 05:55:26.817195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.318 [2024-07-13 05:55:26.817204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.318 [2024-07-13 05:55:26.817215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.318 [2024-07-13 05:55:26.817224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.318 [2024-07-13 05:55:26.817235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.318 [2024-07-13 05:55:26.817244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.318 [2024-07-13 05:55:26.817256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.318 [2024-07-13 05:55:26.817265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.318 [2024-07-13 05:55:26.817276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.318 [2024-07-13 05:55:26.817285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.318 [2024-07-13 05:55:26.817296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.318 [2024-07-13 05:55:26.817310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.318 [2024-07-13 05:55:26.817325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.318 [2024-07-13 05:55:26.817335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.318 [2024-07-13 05:55:26.817346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.318 [2024-07-13 05:55:26.817355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.318 [2024-07-13 05:55:26.817379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:08:35.318 [2024-07-13 05:55:26.817396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.318 [2024-07-13 05:55:26.817408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.318 [2024-07-13 05:55:26.817417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.318 [2024-07-13 05:55:26.817428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.318 [2024-07-13 05:55:26.817438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.319 [2024-07-13 05:55:26.817449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.319 [2024-07-13 05:55:26.817458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.319 [2024-07-13 05:55:26.817469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.319 [2024-07-13 05:55:26.817478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.319 [2024-07-13 05:55:26.817489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.319 [2024-07-13 05:55:26.817498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.319 [2024-07-13 05:55:26.817509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.319 [2024-07-13 05:55:26.817519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.319 [2024-07-13 05:55:26.817530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.319 [2024-07-13 05:55:26.817539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.319 [2024-07-13 05:55:26.817550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.319 [2024-07-13 05:55:26.817559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.319 [2024-07-13 05:55:26.817570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.319 [2024-07-13 05:55:26.817580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.319 [2024-07-13 05:55:26.817591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.319 
[2024-07-13 05:55:26.817600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.319 [2024-07-13 05:55:26.817611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.319 [2024-07-13 05:55:26.817620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.319 [2024-07-13 05:55:26.817631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.319 [2024-07-13 05:55:26.817640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.319 [2024-07-13 05:55:26.817651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.319 [2024-07-13 05:55:26.817661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.319 [2024-07-13 05:55:26.817673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.319 [2024-07-13 05:55:26.817683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.319 [2024-07-13 05:55:26.817694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.319 [2024-07-13 05:55:26.817704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.319 [2024-07-13 05:55:26.817715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.319 [2024-07-13 05:55:26.817726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.319 [2024-07-13 05:55:26.817738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.319 [2024-07-13 05:55:26.817748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.319 [2024-07-13 05:55:26.817759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.319 [2024-07-13 05:55:26.817768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.319 [2024-07-13 05:55:26.817779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.319 [2024-07-13 05:55:26.817788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.319 [2024-07-13 05:55:26.817799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.319 [2024-07-13 
05:55:26.817808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.319 [2024-07-13 05:55:26.817819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.319 [2024-07-13 05:55:26.817828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.319 [2024-07-13 05:55:26.817839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.319 [2024-07-13 05:55:26.817848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.319 [2024-07-13 05:55:26.817860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.319 [2024-07-13 05:55:26.817869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.319 [2024-07-13 05:55:26.817880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.319 [2024-07-13 05:55:26.817889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.319 [2024-07-13 05:55:26.817900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.319 [2024-07-13 05:55:26.817910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.319 [2024-07-13 05:55:26.817921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.319 [2024-07-13 05:55:26.817930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.319 [2024-07-13 05:55:26.817941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.319 [2024-07-13 05:55:26.817950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.319 [2024-07-13 05:55:26.817962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.319 [2024-07-13 05:55:26.817971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.319 [2024-07-13 05:55:26.817992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.319 [2024-07-13 05:55:26.818008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.319 [2024-07-13 05:55:26.818026] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e56f0 is same with the state(5) to be set 00:08:35.319 [2024-07-13 05:55:26.818083] 
bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x16e56f0 was disconnected and freed. reset controller. 00:08:35.319 [2024-07-13 05:55:26.818197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:35.319 [2024-07-13 05:55:26.818215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.319 [2024-07-13 05:55:26.818226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:08:35.319 [2024-07-13 05:55:26.818248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.319 [2024-07-13 05:55:26.818265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:08:35.319 [2024-07-13 05:55:26.818276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.319 [2024-07-13 05:55:26.818286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:08:35.319 [2024-07-13 05:55:26.818295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.319 [2024-07-13 05:55:26.818304] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16afd80 is same with the state(5) to be set 00:08:35.319 [2024-07-13 05:55:26.819432] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:08:35.319 task offset: 81920 on job bdev=Nvme0n1 fails 00:08:35.319 00:08:35.319 Latency(us) 00:08:35.319 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:35.319 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:35.319 Job: Nvme0n1 ended in about 0.47 seconds with error 00:08:35.319 Verification LBA range: start 0x0 length 0x400 00:08:35.319 Nvme0n1 : 0.47 1364.58 85.29 136.46 0.00 41019.14 2204.39 44802.79 00:08:35.319 =================================================================================================================== 00:08:35.319 Total : 1364.58 85.29 136.46 0.00 41019.14 2204.39 44802.79 00:08:35.319 [2024-07-13 05:55:26.821414] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:35.319 [2024-07-13 05:55:26.821444] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16afd80 (9): Bad file descriptor 00:08:35.319 [2024-07-13 05:55:26.826886] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
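The wall of ABORTED - SQ DELETION completions and the controller reset above are the intended outcome rather than a transport failure: host_management.sh revokes the initiator's host NQN mid-run and then restores it. Reduced to the two RPCs traced at host_management.sh@84 and @85, the sequence is:

# Revoke access for the initiator: the target tears down its queue pairs and
# bdevperf's in-flight verify I/O is aborted, so the job reports a failure.
./scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

# Restore access: the host-side reset path reconnects, matching the
# "Resetting controller successful" notice above.
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
sleep 1    # host_management.sh@87 gives the reconnect a moment to settle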
00:08:36.255 05:55:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 76732 00:08:36.255 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (76732) - No such process 00:08:36.255 05:55:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:36.255 05:55:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:36.255 05:55:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:36.255 05:55:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:36.255 05:55:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:08:36.255 05:55:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:08:36.255 05:55:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:36.255 05:55:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:36.255 { 00:08:36.255 "params": { 00:08:36.255 "name": "Nvme$subsystem", 00:08:36.255 "trtype": "$TEST_TRANSPORT", 00:08:36.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:36.255 "adrfam": "ipv4", 00:08:36.255 "trsvcid": "$NVMF_PORT", 00:08:36.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:36.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:36.255 "hdgst": ${hdgst:-false}, 00:08:36.255 "ddgst": ${ddgst:-false} 00:08:36.255 }, 00:08:36.255 "method": "bdev_nvme_attach_controller" 00:08:36.255 } 00:08:36.255 EOF 00:08:36.255 )") 00:08:36.255 05:55:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:08:36.255 05:55:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:08:36.255 05:55:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:08:36.255 05:55:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:36.255 "params": { 00:08:36.255 "name": "Nvme0", 00:08:36.255 "trtype": "tcp", 00:08:36.255 "traddr": "10.0.0.2", 00:08:36.255 "adrfam": "ipv4", 00:08:36.255 "trsvcid": "4420", 00:08:36.255 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:36.255 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:36.255 "hdgst": false, 00:08:36.255 "ddgst": false 00:08:36.255 }, 00:08:36.255 "method": "bdev_nvme_attach_controller" 00:08:36.255 }' 00:08:36.255 [2024-07-13 05:55:27.865944] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:08:36.255 [2024-07-13 05:55:27.866037] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76772 ] 00:08:36.514 [2024-07-13 05:55:28.002964] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.514 [2024-07-13 05:55:28.044460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.514 [2024-07-13 05:55:28.085612] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:36.514 Running I/O for 1 seconds... 
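The earlier 10-second job was declared healthy by polling bdevperf's private RPC socket until the Nvme0n1 read counter crossed a threshold (the read_io_count=67 and 515 samples above); the 1-second rerun here is judged by its final summary instead. A reduced sketch of one iteration of that waitforio poll:

# Ask bdevperf (not the target) for per-bdev I/O statistics and pull out
# the read-op counter; the check passes once it reaches 100.
read_io_count=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
  | jq -r '.bdevs[0].num_read_ops')
if [ "$read_io_count" -ge 100 ]; then
  echo "I/O is flowing: num_read_ops=$read_io_count"
else
  sleep 0.25   # the interval between the 67 and 515 samples above
fi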
00:08:37.893 00:08:37.893 Latency(us) 00:08:37.893 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:37.893 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:37.893 Verification LBA range: start 0x0 length 0x400 00:08:37.893 Nvme0n1 : 1.02 1500.77 93.80 0.00 0.00 41691.94 4200.26 43134.60 00:08:37.893 =================================================================================================================== 00:08:37.893 Total : 1500.77 93.80 0.00 0.00 41691.94 4200.26 43134.60 00:08:37.893 05:55:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:37.893 05:55:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:37.893 05:55:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:08:37.893 05:55:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:37.893 05:55:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:37.893 05:55:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:37.893 05:55:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:08:37.893 05:55:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:37.893 05:55:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:08:37.893 05:55:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:37.893 05:55:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:37.893 rmmod nvme_tcp 00:08:37.893 rmmod nvme_fabrics 00:08:37.893 rmmod nvme_keyring 00:08:37.893 05:55:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:37.893 05:55:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:08:37.893 05:55:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:08:37.893 05:55:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 76685 ']' 00:08:37.893 05:55:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 76685 00:08:37.893 05:55:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 76685 ']' 00:08:37.893 05:55:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 76685 00:08:37.893 05:55:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:08:37.893 05:55:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:37.893 05:55:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76685 00:08:37.893 05:55:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:37.893 05:55:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:37.893 killing process with pid 76685 00:08:37.893 05:55:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76685' 00:08:37.893 05:55:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 76685 00:08:37.893 05:55:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 76685 00:08:38.152 [2024-07-13 05:55:29.678636] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd 
for core 1, errno: 2 00:08:38.152 05:55:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:38.152 05:55:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:38.152 05:55:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:38.152 05:55:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:38.152 05:55:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:38.152 05:55:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:38.152 05:55:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:38.152 05:55:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.152 05:55:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:38.152 05:55:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:38.152 00:08:38.152 real 0m4.823s 00:08:38.152 user 0m17.945s 00:08:38.152 sys 0m1.291s 00:08:38.152 05:55:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:38.152 05:55:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:38.152 ************************************ 00:08:38.152 END TEST nvmf_host_management 00:08:38.152 ************************************ 00:08:38.152 05:55:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:38.152 05:55:29 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:38.152 05:55:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:38.152 05:55:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:38.152 05:55:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:38.152 ************************************ 00:08:38.152 START TEST nvmf_lvol 00:08:38.152 ************************************ 00:08:38.152 05:55:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:38.152 * Looking for test storage... 
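Before nvmf_lvol starts probing its environment, note what nvmftestfini just unwound for the previous test: sync, unload of the host-side NVMe modules, killing the target (pid 76685), and flushing the initiator address. Stripped of its retries and error handling it amounts to roughly the sketch below; the namespace delete is an inference about what _remove_spdk_ns does, the rest mirrors the commands traced above:

sync
modprobe -v -r nvme-tcp            # the rmmod lines above show nvme_tcp, nvme_fabrics and nvme_keyring going away
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null   # nvmfpid=76685 in this run
ip netns delete nvmf_tgt_ns_spdk   # assumed body of _remove_spdk_ns
ip -4 addr flush nvmf_init_if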
00:08:38.152 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:38.152 05:55:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:38.152 05:55:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:38.152 05:55:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:38.152 05:55:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:38.152 05:55:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:38.152 05:55:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:38.152 05:55:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:38.152 05:55:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:38.152 05:55:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:38.152 05:55:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:38.152 05:55:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:38.152 05:55:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:38.152 05:55:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:08:38.152 05:55:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=d95af516-4532-4483-a837-b3cd72acabce 00:08:38.152 05:55:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:38.152 05:55:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:38.152 05:55:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:38.152 05:55:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:38.410 05:55:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:38.410 05:55:29 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:38.410 05:55:29 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:38.410 05:55:29 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:38.411 05:55:29 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.411 05:55:29 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.411 05:55:29 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.411 05:55:29 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:38.411 05:55:29 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.411 05:55:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:08:38.411 05:55:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:38.411 05:55:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:38.411 05:55:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:38.411 05:55:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:38.411 05:55:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:38.411 05:55:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:38.411 05:55:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:38.411 05:55:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:38.411 05:55:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:38.411 05:55:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:38.411 05:55:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:38.411 05:55:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:38.411 05:55:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:38.411 05:55:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:38.411 05:55:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:38.411 05:55:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:38.411 05:55:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:38.411 05:55:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:38.411 05:55:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:38.411 05:55:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:38.411 05:55:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:38.411 05:55:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.411 05:55:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:38.411 05:55:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:38.411 05:55:29 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:38.411 05:55:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:38.411 05:55:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:38.411 05:55:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:38.411 05:55:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:38.411 05:55:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:38.411 05:55:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:38.411 05:55:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:38.411 05:55:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:38.411 05:55:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:38.411 05:55:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:38.411 05:55:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:38.411 05:55:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:38.411 05:55:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:38.411 05:55:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:38.411 05:55:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:38.411 05:55:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:38.411 05:55:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:38.411 Cannot find device "nvmf_tgt_br" 00:08:38.411 05:55:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:08:38.411 05:55:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:38.411 Cannot find device "nvmf_tgt_br2" 00:08:38.411 05:55:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:08:38.411 05:55:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:38.411 05:55:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:38.411 Cannot find device "nvmf_tgt_br" 00:08:38.411 05:55:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:08:38.411 05:55:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:38.411 Cannot find device "nvmf_tgt_br2" 00:08:38.411 05:55:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:08:38.411 05:55:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:38.411 05:55:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:38.411 05:55:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:38.411 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:38.411 05:55:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:08:38.411 05:55:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:38.411 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:38.411 05:55:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:08:38.411 05:55:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:38.411 05:55:30 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:38.411 05:55:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:38.411 05:55:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:38.411 05:55:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:38.411 05:55:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:38.411 05:55:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:38.411 05:55:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:38.411 05:55:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:38.411 05:55:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:38.411 05:55:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:38.411 05:55:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:38.411 05:55:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:38.411 05:55:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:38.411 05:55:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:38.670 05:55:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:38.670 05:55:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:38.670 05:55:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:38.670 05:55:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:38.670 05:55:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:38.670 05:55:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:38.670 05:55:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:38.670 05:55:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:38.670 05:55:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:38.671 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:38.671 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:08:38.671 00:08:38.671 --- 10.0.0.2 ping statistics --- 00:08:38.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.671 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:08:38.671 05:55:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:38.671 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:38.671 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:08:38.671 00:08:38.671 --- 10.0.0.3 ping statistics --- 00:08:38.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.671 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:08:38.671 05:55:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:38.671 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:38.671 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:08:38.671 00:08:38.671 --- 10.0.0.1 ping statistics --- 00:08:38.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.671 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:08:38.671 05:55:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:38.671 05:55:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:08:38.671 05:55:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:38.671 05:55:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:38.671 05:55:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:38.671 05:55:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:38.671 05:55:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:38.671 05:55:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:38.671 05:55:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:38.671 05:55:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:38.671 05:55:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:38.671 05:55:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:38.671 05:55:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:38.671 05:55:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=76977 00:08:38.671 05:55:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:38.671 05:55:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 76977 00:08:38.671 05:55:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 76977 ']' 00:08:38.671 05:55:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.671 05:55:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:38.671 05:55:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.671 05:55:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:38.671 05:55:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:38.671 [2024-07-13 05:55:30.277468] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:08:38.671 [2024-07-13 05:55:30.277563] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:38.930 [2024-07-13 05:55:30.408108] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:38.930 [2024-07-13 05:55:30.441798] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:38.930 [2024-07-13 05:55:30.441863] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:38.930 [2024-07-13 05:55:30.441888] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:38.930 [2024-07-13 05:55:30.441896] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:38.930 [2024-07-13 05:55:30.441902] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:38.930 [2024-07-13 05:55:30.442088] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:38.930 [2024-07-13 05:55:30.444963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:38.930 [2024-07-13 05:55:30.445020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.930 [2024-07-13 05:55:30.473619] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:39.497 05:55:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:39.497 05:55:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:08:39.497 05:55:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:39.497 05:55:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:39.497 05:55:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:39.756 05:55:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:39.756 05:55:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:40.014 [2024-07-13 05:55:31.493581] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:40.014 05:55:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:40.273 05:55:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:40.273 05:55:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:40.532 05:55:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:40.532 05:55:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:40.814 05:55:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:41.073 05:55:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=fd79237d-445c-4dae-b4fb-a67d12da6a1a 00:08:41.073 05:55:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u fd79237d-445c-4dae-b4fb-a67d12da6a1a lvol 20 00:08:41.336 05:55:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=ccc16081-d85e-4279-8c7f-8251fbde09ff 00:08:41.336 05:55:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:41.593 05:55:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ccc16081-d85e-4279-8c7f-8251fbde09ff 00:08:41.850 05:55:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:42.108 [2024-07-13 05:55:33.612379] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:42.108 05:55:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:42.366 05:55:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=77053 00:08:42.366 05:55:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:42.366 05:55:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:43.301 05:55:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot ccc16081-d85e-4279-8c7f-8251fbde09ff MY_SNAPSHOT 00:08:43.559 05:55:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=fafc504a-dd25-4f19-ba69-a616e8d98e54 00:08:43.559 05:55:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize ccc16081-d85e-4279-8c7f-8251fbde09ff 30 00:08:43.817 05:55:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone fafc504a-dd25-4f19-ba69-a616e8d98e54 MY_CLONE 00:08:44.075 05:55:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=b5d9491b-2112-4bc0-add7-b8259563c343 00:08:44.075 05:55:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate b5d9491b-2112-4bc0-add7-b8259563c343 00:08:44.668 05:55:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 77053 00:08:52.778 Initializing NVMe Controllers 00:08:52.778 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:52.778 Controller IO queue size 128, less than required. 00:08:52.778 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:52.778 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:52.778 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:52.778 Initialization complete. Launching workers. 
00:08:52.778 ======================================================== 00:08:52.778 Latency(us) 00:08:52.778 Device Information : IOPS MiB/s Average min max 00:08:52.778 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10829.60 42.30 11823.60 1970.63 47318.58 00:08:52.778 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10933.50 42.71 11708.49 2837.78 83968.56 00:08:52.778 ======================================================== 00:08:52.778 Total : 21763.10 85.01 11765.77 1970.63 83968.56 00:08:52.778 00:08:52.778 05:55:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:52.778 05:55:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete ccc16081-d85e-4279-8c7f-8251fbde09ff 00:08:53.037 05:55:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fd79237d-445c-4dae-b4fb-a67d12da6a1a 00:08:53.297 05:55:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:53.297 05:55:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:53.297 05:55:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:53.297 05:55:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:53.297 05:55:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:08:53.297 05:55:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:53.297 05:55:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:08:53.297 05:55:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:53.297 05:55:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:53.297 rmmod nvme_tcp 00:08:53.297 rmmod nvme_fabrics 00:08:53.297 rmmod nvme_keyring 00:08:53.297 05:55:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:53.297 05:55:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:08:53.297 05:55:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:08:53.297 05:55:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 76977 ']' 00:08:53.297 05:55:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 76977 00:08:53.297 05:55:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 76977 ']' 00:08:53.297 05:55:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 76977 00:08:53.297 05:55:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:08:53.297 05:55:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:53.297 05:55:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76977 00:08:53.297 killing process with pid 76977 00:08:53.297 05:55:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:53.297 05:55:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:53.297 05:55:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76977' 00:08:53.297 05:55:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 76977 00:08:53.297 05:55:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 76977 00:08:53.556 05:55:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:53.556 05:55:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
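For reference, the nvmf_lvol run traced above reduces to the RPC sequence below. This is a condensed sketch rather than the full test script: rpc.py stands for scripts/rpc.py in the SPDK repo as invoked above, and the <lvs-uuid>/<lvol-uuid>/<snap-uuid>/<clone-uuid> placeholders stand for the UUIDs printed at runtime (the harness captures them in shell variables).

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512                     # Malloc0
  rpc.py bdev_malloc_create 64 512                     # Malloc1
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  rpc.py bdev_lvol_create_lvstore raid0 lvs            # prints <lvs-uuid>
  rpc.py bdev_lvol_create -u <lvs-uuid> lvol 20        # prints <lvol-uuid>
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # I/O load runs concurrently with the lvol operations below (the test waits on its pid)
  spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
  rpc.py bdev_lvol_snapshot <lvol-uuid> MY_SNAPSHOT    # prints <snap-uuid>
  rpc.py bdev_lvol_resize <lvol-uuid> 30
  rpc.py bdev_lvol_clone <snap-uuid> MY_CLONE          # prints <clone-uuid>
  rpc.py bdev_lvol_inflate <clone-uuid>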
00:08:53.556 05:55:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:53.556 05:55:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:53.556 05:55:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:53.556 05:55:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:53.556 05:55:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:53.556 05:55:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.556 05:55:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:53.556 ************************************ 00:08:53.556 END TEST nvmf_lvol 00:08:53.556 ************************************ 00:08:53.556 00:08:53.556 real 0m15.367s 00:08:53.556 user 1m4.741s 00:08:53.556 sys 0m4.028s 00:08:53.556 05:55:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:53.556 05:55:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:53.556 05:55:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:53.556 05:55:45 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:53.556 05:55:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:53.556 05:55:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:53.556 05:55:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:53.556 ************************************ 00:08:53.556 START TEST nvmf_lvs_grow 00:08:53.556 ************************************ 00:08:53.556 05:55:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:53.556 * Looking for test storage... 
00:08:53.816 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:53.816 05:55:45 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:53.816 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:53.816 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:53.816 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:53.816 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:53.816 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:53.816 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:53.816 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:53.816 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:53.816 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:53.816 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:53.816 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:53.816 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:08:53.816 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=d95af516-4532-4483-a837-b3cd72acabce 00:08:53.816 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:53.816 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:53.816 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:53.816 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:53.816 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:53.816 05:55:45 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:53.816 05:55:45 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:53.816 05:55:45 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:53.816 05:55:45 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.816 05:55:45 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.816 05:55:45 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.816 05:55:45 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:53.816 05:55:45 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.816 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:08:53.816 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:53.816 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:53.816 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:53.816 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:53.816 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:53.816 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:53.816 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:53.816 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:53.816 05:55:45 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:53.816 05:55:45 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:53.816 05:55:45 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:53.816 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:53.816 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:53.816 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:53.816 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:53.816 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:53.816 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:08:53.816 05:55:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:53.816 05:55:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.816 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:53.816 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:53.816 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:53.816 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:53.816 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:53.816 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:53.817 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:53.817 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:53.817 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:53.817 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:53.817 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:53.817 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:53.817 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:53.817 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:53.817 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:53.817 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:53.817 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:53.817 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:53.817 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:53.817 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:53.817 Cannot find device "nvmf_tgt_br" 00:08:53.817 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:08:53.817 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:53.817 Cannot find device "nvmf_tgt_br2" 00:08:53.817 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:08:53.817 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:53.817 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:53.817 Cannot find device "nvmf_tgt_br" 00:08:53.817 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:08:53.817 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:53.817 Cannot find device "nvmf_tgt_br2" 00:08:53.817 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:08:53.817 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:53.817 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:53.817 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:53.817 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:08:53.817 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:08:53.817 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:53.817 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:53.817 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:08:53.817 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:53.817 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:53.817 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:53.817 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:53.817 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:53.817 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:53.817 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:53.817 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:54.076 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:54.076 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:54.076 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:54.076 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:54.076 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:54.076 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:54.076 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:54.076 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:54.076 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:54.076 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:54.076 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:54.076 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:54.076 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:54.076 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:54.076 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:54.076 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:54.076 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:54.076 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:08:54.076 00:08:54.076 --- 10.0.0.2 ping statistics --- 00:08:54.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.076 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:08:54.076 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:54.076 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:54.076 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:08:54.076 00:08:54.076 --- 10.0.0.3 ping statistics --- 00:08:54.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.076 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:08:54.076 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:54.076 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:54.076 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:08:54.076 00:08:54.076 --- 10.0.0.1 ping statistics --- 00:08:54.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.076 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:08:54.076 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:54.076 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:08:54.076 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:54.076 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:54.076 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:54.076 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:54.076 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:54.076 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:54.076 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:54.076 05:55:45 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:54.076 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:54.076 05:55:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:54.076 05:55:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:54.076 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=77381 00:08:54.076 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:54.076 05:55:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 77381 00:08:54.076 05:55:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 77381 ']' 00:08:54.076 05:55:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.076 05:55:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:54.076 05:55:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
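The nvmf_veth_init calls traced here (and in the earlier nvmf_lvol run) build the same virtual topology each time. A condensed sketch of that setup follows, assuming root and the iproute2/iptables tools the harness uses; the second target interface (nvmf_tgt_if2 / 10.0.0.3) is configured the same way and is omitted here.

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                                   # host -> target namespace
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target namespace -> host
  # nvmfappstart then launches the target inside the namespace and waits on /var/tmp/spdk.sock
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1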
00:08:54.076 05:55:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:54.076 05:55:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:54.076 [2024-07-13 05:55:45.749259] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:08:54.076 [2024-07-13 05:55:45.749348] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:54.336 [2024-07-13 05:55:45.884156] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.336 [2024-07-13 05:55:45.915000] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:54.336 [2024-07-13 05:55:45.915051] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:54.336 [2024-07-13 05:55:45.915061] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:54.336 [2024-07-13 05:55:45.915067] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:54.336 [2024-07-13 05:55:45.915073] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:54.336 [2024-07-13 05:55:45.915094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.336 [2024-07-13 05:55:45.940028] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:54.903 05:55:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:54.903 05:55:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:08:54.903 05:55:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:54.903 05:55:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:54.903 05:55:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:55.162 05:55:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:55.162 05:55:46 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:55.162 [2024-07-13 05:55:46.885999] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:55.421 05:55:46 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:55.421 05:55:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:55.421 05:55:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:55.421 05:55:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:55.421 ************************************ 00:08:55.421 START TEST lvs_grow_clean 00:08:55.421 ************************************ 00:08:55.421 05:55:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:08:55.421 05:55:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:55.421 05:55:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:55.421 05:55:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:55.421 05:55:46 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:55.421 05:55:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:55.421 05:55:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:55.421 05:55:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:55.421 05:55:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:55.421 05:55:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:55.680 05:55:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:55.680 05:55:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:55.939 05:55:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=e187421a-fb93-4aa3-b5c0-7423d798ddb3 00:08:55.939 05:55:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e187421a-fb93-4aa3-b5c0-7423d798ddb3 00:08:55.939 05:55:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:55.939 05:55:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:55.939 05:55:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:55.939 05:55:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e187421a-fb93-4aa3-b5c0-7423d798ddb3 lvol 150 00:08:56.199 05:55:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=fb63351d-d5cb-42b5-b3d4-0ac563bf072d 00:08:56.199 05:55:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:56.199 05:55:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:56.458 [2024-07-13 05:55:48.022202] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:56.458 [2024-07-13 05:55:48.022265] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:56.458 true 00:08:56.458 05:55:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:56.458 05:55:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e187421a-fb93-4aa3-b5c0-7423d798ddb3 00:08:56.717 05:55:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:56.717 05:55:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:56.975 05:55:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 fb63351d-d5cb-42b5-b3d4-0ac563bf072d 00:08:57.233 05:55:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:57.492 [2024-07-13 05:55:48.974829] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:57.492 05:55:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:57.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:57.752 05:55:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=77464 00:08:57.752 05:55:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:57.752 05:55:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 77464 /var/tmp/bdevperf.sock 00:08:57.752 05:55:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:57.753 05:55:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 77464 ']' 00:08:57.753 05:55:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:57.753 05:55:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:57.753 05:55:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:57.753 05:55:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:57.753 05:55:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:57.753 [2024-07-13 05:55:49.308567] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:08:57.753 [2024-07-13 05:55:49.308662] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77464 ] 00:08:57.753 [2024-07-13 05:55:49.449748] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.012 [2024-07-13 05:55:49.492351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:58.012 [2024-07-13 05:55:49.525262] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:58.580 05:55:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:58.580 05:55:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:08:58.580 05:55:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:58.839 Nvme0n1 00:08:58.839 05:55:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:59.099 [ 00:08:59.099 { 00:08:59.099 "name": "Nvme0n1", 00:08:59.099 "aliases": [ 00:08:59.099 "fb63351d-d5cb-42b5-b3d4-0ac563bf072d" 00:08:59.099 ], 00:08:59.099 "product_name": "NVMe disk", 00:08:59.099 "block_size": 4096, 00:08:59.099 "num_blocks": 38912, 00:08:59.099 "uuid": "fb63351d-d5cb-42b5-b3d4-0ac563bf072d", 00:08:59.099 "assigned_rate_limits": { 00:08:59.099 "rw_ios_per_sec": 0, 00:08:59.099 "rw_mbytes_per_sec": 0, 00:08:59.099 "r_mbytes_per_sec": 0, 00:08:59.099 "w_mbytes_per_sec": 0 00:08:59.099 }, 00:08:59.099 "claimed": false, 00:08:59.099 "zoned": false, 00:08:59.099 "supported_io_types": { 00:08:59.099 "read": true, 00:08:59.099 "write": true, 00:08:59.099 "unmap": true, 00:08:59.099 "flush": true, 00:08:59.099 "reset": true, 00:08:59.099 "nvme_admin": true, 00:08:59.099 "nvme_io": true, 00:08:59.099 "nvme_io_md": false, 00:08:59.099 "write_zeroes": true, 00:08:59.099 "zcopy": false, 00:08:59.099 "get_zone_info": false, 00:08:59.099 "zone_management": false, 00:08:59.099 "zone_append": false, 00:08:59.099 "compare": true, 00:08:59.099 "compare_and_write": true, 00:08:59.099 "abort": true, 00:08:59.099 "seek_hole": false, 00:08:59.099 "seek_data": false, 00:08:59.099 "copy": true, 00:08:59.099 "nvme_iov_md": false 00:08:59.099 }, 00:08:59.099 "memory_domains": [ 00:08:59.099 { 00:08:59.099 "dma_device_id": "system", 00:08:59.099 "dma_device_type": 1 00:08:59.099 } 00:08:59.099 ], 00:08:59.099 "driver_specific": { 00:08:59.099 "nvme": [ 00:08:59.099 { 00:08:59.099 "trid": { 00:08:59.099 "trtype": "TCP", 00:08:59.099 "adrfam": "IPv4", 00:08:59.099 "traddr": "10.0.0.2", 00:08:59.099 "trsvcid": "4420", 00:08:59.099 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:59.099 }, 00:08:59.099 "ctrlr_data": { 00:08:59.099 "cntlid": 1, 00:08:59.099 "vendor_id": "0x8086", 00:08:59.099 "model_number": "SPDK bdev Controller", 00:08:59.099 "serial_number": "SPDK0", 00:08:59.099 "firmware_revision": "24.09", 00:08:59.099 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:59.099 "oacs": { 00:08:59.099 "security": 0, 00:08:59.099 "format": 0, 00:08:59.099 "firmware": 0, 00:08:59.099 "ns_manage": 0 00:08:59.099 }, 00:08:59.099 "multi_ctrlr": true, 00:08:59.099 
"ana_reporting": false 00:08:59.099 }, 00:08:59.099 "vs": { 00:08:59.099 "nvme_version": "1.3" 00:08:59.099 }, 00:08:59.099 "ns_data": { 00:08:59.099 "id": 1, 00:08:59.099 "can_share": true 00:08:59.099 } 00:08:59.099 } 00:08:59.099 ], 00:08:59.099 "mp_policy": "active_passive" 00:08:59.099 } 00:08:59.099 } 00:08:59.099 ] 00:08:59.099 05:55:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=77482 00:08:59.099 05:55:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:59.099 05:55:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:59.359 Running I/O for 10 seconds... 00:09:00.296 Latency(us) 00:09:00.296 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:00.296 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:00.296 Nvme0n1 : 1.00 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:09:00.296 =================================================================================================================== 00:09:00.296 Total : 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:09:00.296 00:09:01.257 05:55:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e187421a-fb93-4aa3-b5c0-7423d798ddb3 00:09:01.257 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:01.257 Nvme0n1 : 2.00 6921.50 27.04 0.00 0.00 0.00 0.00 0.00 00:09:01.257 =================================================================================================================== 00:09:01.257 Total : 6921.50 27.04 0.00 0.00 0.00 0.00 0.00 00:09:01.257 00:09:01.514 true 00:09:01.514 05:55:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:01.514 05:55:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e187421a-fb93-4aa3-b5c0-7423d798ddb3 00:09:01.772 05:55:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:01.772 05:55:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:01.772 05:55:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 77482 00:09:02.337 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:02.337 Nvme0n1 : 3.00 7069.67 27.62 0.00 0.00 0.00 0.00 0.00 00:09:02.337 =================================================================================================================== 00:09:02.337 Total : 7069.67 27.62 0.00 0.00 0.00 0.00 0.00 00:09:02.337 00:09:03.272 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:03.272 Nvme0n1 : 4.00 7048.50 27.53 0.00 0.00 0.00 0.00 0.00 00:09:03.272 =================================================================================================================== 00:09:03.272 Total : 7048.50 27.53 0.00 0.00 0.00 0.00 0.00 00:09:03.272 00:09:04.207 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:04.207 Nvme0n1 : 5.00 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:09:04.207 =================================================================================================================== 00:09:04.207 Total : 6985.00 27.29 0.00 0.00 0.00 
0.00 0.00 00:09:04.207 00:09:05.586 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:05.586 Nvme0n1 : 6.00 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:09:05.586 =================================================================================================================== 00:09:05.586 Total : 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:09:05.586 00:09:06.520 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:06.521 Nvme0n1 : 7.00 6966.86 27.21 0.00 0.00 0.00 0.00 0.00 00:09:06.521 =================================================================================================================== 00:09:06.521 Total : 6966.86 27.21 0.00 0.00 0.00 0.00 0.00 00:09:06.521 00:09:07.456 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:07.456 Nvme0n1 : 8.00 6969.12 27.22 0.00 0.00 0.00 0.00 0.00 00:09:07.456 =================================================================================================================== 00:09:07.456 Total : 6969.12 27.22 0.00 0.00 0.00 0.00 0.00 00:09:07.456 00:09:08.389 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:08.389 Nvme0n1 : 9.00 6956.78 27.17 0.00 0.00 0.00 0.00 0.00 00:09:08.389 =================================================================================================================== 00:09:08.389 Total : 6956.78 27.17 0.00 0.00 0.00 0.00 0.00 00:09:08.389 00:09:09.323 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:09.323 Nvme0n1 : 10.00 6946.90 27.14 0.00 0.00 0.00 0.00 0.00 00:09:09.323 =================================================================================================================== 00:09:09.323 Total : 6946.90 27.14 0.00 0.00 0.00 0.00 0.00 00:09:09.323 00:09:09.323 00:09:09.323 Latency(us) 00:09:09.323 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:09.323 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:09.323 Nvme0n1 : 10.02 6945.93 27.13 0.00 0.00 18422.25 15609.48 48615.80 00:09:09.323 =================================================================================================================== 00:09:09.323 Total : 6945.93 27.13 0.00 0.00 18422.25 15609.48 48615.80 00:09:09.323 0 00:09:09.323 05:56:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 77464 00:09:09.323 05:56:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 77464 ']' 00:09:09.323 05:56:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 77464 00:09:09.323 05:56:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:09:09.323 05:56:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:09.323 05:56:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77464 00:09:09.323 killing process with pid 77464 00:09:09.323 Received shutdown signal, test time was about 10.000000 seconds 00:09:09.323 00:09:09.323 Latency(us) 00:09:09.323 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:09.323 =================================================================================================================== 00:09:09.323 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:09.323 05:56:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # 
process_name=reactor_1 00:09:09.323 05:56:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:09.323 05:56:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77464' 00:09:09.323 05:56:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 77464 00:09:09.323 05:56:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 77464 00:09:09.580 05:56:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:09.838 05:56:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:10.096 05:56:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:10.096 05:56:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e187421a-fb93-4aa3-b5c0-7423d798ddb3 00:09:10.353 05:56:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:10.353 05:56:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:10.353 05:56:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:10.611 [2024-07-13 05:56:02.250643] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:10.611 05:56:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e187421a-fb93-4aa3-b5c0-7423d798ddb3 00:09:10.611 05:56:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:09:10.611 05:56:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e187421a-fb93-4aa3-b5c0-7423d798ddb3 00:09:10.611 05:56:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:10.611 05:56:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:10.611 05:56:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:10.611 05:56:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:10.611 05:56:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:10.611 05:56:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:10.611 05:56:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:10.611 05:56:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:10.611 05:56:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e187421a-fb93-4aa3-b5c0-7423d798ddb3 00:09:10.868 request: 00:09:10.868 { 00:09:10.868 "uuid": "e187421a-fb93-4aa3-b5c0-7423d798ddb3", 00:09:10.868 "method": "bdev_lvol_get_lvstores", 00:09:10.868 "req_id": 1 00:09:10.868 } 00:09:10.868 Got JSON-RPC error response 00:09:10.868 response: 00:09:10.868 { 00:09:10.868 "code": -19, 00:09:10.868 "message": "No such device" 00:09:10.868 } 00:09:10.868 05:56:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:09:10.868 05:56:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:10.868 05:56:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:10.868 05:56:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:10.868 05:56:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:11.126 aio_bdev 00:09:11.126 05:56:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev fb63351d-d5cb-42b5-b3d4-0ac563bf072d 00:09:11.126 05:56:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=fb63351d-d5cb-42b5-b3d4-0ac563bf072d 00:09:11.126 05:56:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:11.126 05:56:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:09:11.126 05:56:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:11.126 05:56:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:11.126 05:56:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:11.384 05:56:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fb63351d-d5cb-42b5-b3d4-0ac563bf072d -t 2000 00:09:11.642 [ 00:09:11.642 { 00:09:11.642 "name": "fb63351d-d5cb-42b5-b3d4-0ac563bf072d", 00:09:11.642 "aliases": [ 00:09:11.642 "lvs/lvol" 00:09:11.642 ], 00:09:11.642 "product_name": "Logical Volume", 00:09:11.642 "block_size": 4096, 00:09:11.642 "num_blocks": 38912, 00:09:11.642 "uuid": "fb63351d-d5cb-42b5-b3d4-0ac563bf072d", 00:09:11.642 "assigned_rate_limits": { 00:09:11.642 "rw_ios_per_sec": 0, 00:09:11.642 "rw_mbytes_per_sec": 0, 00:09:11.642 "r_mbytes_per_sec": 0, 00:09:11.642 "w_mbytes_per_sec": 0 00:09:11.642 }, 00:09:11.642 "claimed": false, 00:09:11.642 "zoned": false, 00:09:11.642 "supported_io_types": { 00:09:11.642 "read": true, 00:09:11.642 "write": true, 00:09:11.642 "unmap": true, 00:09:11.642 "flush": false, 00:09:11.642 "reset": true, 00:09:11.642 "nvme_admin": false, 00:09:11.642 "nvme_io": false, 00:09:11.642 "nvme_io_md": false, 00:09:11.642 "write_zeroes": true, 00:09:11.642 "zcopy": false, 00:09:11.642 "get_zone_info": false, 00:09:11.642 "zone_management": false, 00:09:11.642 "zone_append": false, 00:09:11.642 "compare": false, 00:09:11.642 "compare_and_write": false, 00:09:11.642 "abort": false, 00:09:11.642 "seek_hole": true, 00:09:11.642 "seek_data": true, 00:09:11.642 "copy": false, 00:09:11.642 "nvme_iov_md": false 00:09:11.642 }, 00:09:11.642 
"driver_specific": { 00:09:11.642 "lvol": { 00:09:11.642 "lvol_store_uuid": "e187421a-fb93-4aa3-b5c0-7423d798ddb3", 00:09:11.642 "base_bdev": "aio_bdev", 00:09:11.642 "thin_provision": false, 00:09:11.642 "num_allocated_clusters": 38, 00:09:11.642 "snapshot": false, 00:09:11.642 "clone": false, 00:09:11.642 "esnap_clone": false 00:09:11.642 } 00:09:11.642 } 00:09:11.642 } 00:09:11.642 ] 00:09:11.642 05:56:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:09:11.642 05:56:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e187421a-fb93-4aa3-b5c0-7423d798ddb3 00:09:11.642 05:56:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:11.901 05:56:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:11.901 05:56:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e187421a-fb93-4aa3-b5c0-7423d798ddb3 00:09:11.901 05:56:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:12.159 05:56:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:12.159 05:56:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete fb63351d-d5cb-42b5-b3d4-0ac563bf072d 00:09:12.417 05:56:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e187421a-fb93-4aa3-b5c0-7423d798ddb3 00:09:12.676 05:56:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:12.933 05:56:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:13.192 ************************************ 00:09:13.192 END TEST lvs_grow_clean 00:09:13.192 ************************************ 00:09:13.192 00:09:13.192 real 0m17.833s 00:09:13.192 user 0m16.848s 00:09:13.192 sys 0m2.401s 00:09:13.192 05:56:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:13.192 05:56:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:13.192 05:56:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:09:13.192 05:56:04 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:13.192 05:56:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:13.192 05:56:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:13.192 05:56:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:13.192 ************************************ 00:09:13.192 START TEST lvs_grow_dirty 00:09:13.192 ************************************ 00:09:13.192 05:56:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:09:13.192 05:56:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:13.192 05:56:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters 
free_clusters 00:09:13.192 05:56:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:13.192 05:56:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:13.192 05:56:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:13.192 05:56:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:13.192 05:56:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:13.192 05:56:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:13.192 05:56:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:13.451 05:56:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:13.451 05:56:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:13.710 05:56:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=010c0471-b93d-4d14-ba40-e0ba40e6d023 00:09:13.710 05:56:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 010c0471-b93d-4d14-ba40-e0ba40e6d023 00:09:13.710 05:56:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:13.968 05:56:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:13.968 05:56:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:13.968 05:56:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 010c0471-b93d-4d14-ba40-e0ba40e6d023 lvol 150 00:09:14.226 05:56:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=4e95bd52-70ab-4900-be47-b6836abf1d5a 00:09:14.226 05:56:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:14.226 05:56:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:14.483 [2024-07-13 05:56:06.080086] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:14.484 [2024-07-13 05:56:06.080204] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:14.484 true 00:09:14.484 05:56:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 010c0471-b93d-4d14-ba40-e0ba40e6d023 00:09:14.484 05:56:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:14.743 05:56:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:14.743 05:56:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:15.002 05:56:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4e95bd52-70ab-4900-be47-b6836abf1d5a 00:09:15.261 05:56:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:15.545 [2024-07-13 05:56:07.032600] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:15.545 05:56:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:15.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:15.807 05:56:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:15.807 05:56:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=77727 00:09:15.807 05:56:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:15.807 05:56:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 77727 /var/tmp/bdevperf.sock 00:09:15.807 05:56:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 77727 ']' 00:09:15.808 05:56:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:15.808 05:56:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:15.808 05:56:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:15.808 05:56:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:15.808 05:56:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:15.808 [2024-07-13 05:56:07.293334] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
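Note: condensed, the dirty-grow setup traced above boils down to the following RPC sequence (a sketch only; repo paths are shortened and the lvstore/lvol UUIDs are shown as placeholders since they differ on every run):

  truncate -s 200M test/nvmf/target/aio_bdev                       # backing file for the AIO bdev
  scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
  scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
      --md-pages-per-cluster-ratio 300 aio_bdev lvs                # yields the 49 data clusters checked above
  scripts/rpc.py bdev_lvol_create -u <lvs-uuid> lvol 150           # 150M lvol on the store
  truncate -s 400M test/nvmf/target/aio_bdev                       # grow the backing file...
  scripts/rpc.py bdev_aio_rescan aio_bdev                          # ...and let the aio bdev pick up the new size
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

bdevperf, launched just above, then attaches to that subsystem over TCP (bdev_nvme_attach_controller below) and drives random writes while the lvstore is grown with bdev_lvol_grow_lvstore.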
00:09:15.808 [2024-07-13 05:56:07.293466] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77727 ] 00:09:15.808 [2024-07-13 05:56:07.425560] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.808 [2024-07-13 05:56:07.459119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:15.808 [2024-07-13 05:56:07.486220] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:15.808 05:56:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:15.808 05:56:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:09:15.808 05:56:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:16.375 Nvme0n1 00:09:16.375 05:56:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:16.375 [ 00:09:16.375 { 00:09:16.375 "name": "Nvme0n1", 00:09:16.375 "aliases": [ 00:09:16.375 "4e95bd52-70ab-4900-be47-b6836abf1d5a" 00:09:16.375 ], 00:09:16.375 "product_name": "NVMe disk", 00:09:16.375 "block_size": 4096, 00:09:16.375 "num_blocks": 38912, 00:09:16.375 "uuid": "4e95bd52-70ab-4900-be47-b6836abf1d5a", 00:09:16.375 "assigned_rate_limits": { 00:09:16.375 "rw_ios_per_sec": 0, 00:09:16.375 "rw_mbytes_per_sec": 0, 00:09:16.375 "r_mbytes_per_sec": 0, 00:09:16.375 "w_mbytes_per_sec": 0 00:09:16.375 }, 00:09:16.375 "claimed": false, 00:09:16.375 "zoned": false, 00:09:16.375 "supported_io_types": { 00:09:16.375 "read": true, 00:09:16.375 "write": true, 00:09:16.375 "unmap": true, 00:09:16.375 "flush": true, 00:09:16.375 "reset": true, 00:09:16.375 "nvme_admin": true, 00:09:16.375 "nvme_io": true, 00:09:16.375 "nvme_io_md": false, 00:09:16.375 "write_zeroes": true, 00:09:16.375 "zcopy": false, 00:09:16.375 "get_zone_info": false, 00:09:16.375 "zone_management": false, 00:09:16.375 "zone_append": false, 00:09:16.375 "compare": true, 00:09:16.375 "compare_and_write": true, 00:09:16.375 "abort": true, 00:09:16.375 "seek_hole": false, 00:09:16.375 "seek_data": false, 00:09:16.375 "copy": true, 00:09:16.375 "nvme_iov_md": false 00:09:16.375 }, 00:09:16.375 "memory_domains": [ 00:09:16.375 { 00:09:16.375 "dma_device_id": "system", 00:09:16.375 "dma_device_type": 1 00:09:16.375 } 00:09:16.375 ], 00:09:16.375 "driver_specific": { 00:09:16.375 "nvme": [ 00:09:16.375 { 00:09:16.375 "trid": { 00:09:16.375 "trtype": "TCP", 00:09:16.375 "adrfam": "IPv4", 00:09:16.375 "traddr": "10.0.0.2", 00:09:16.375 "trsvcid": "4420", 00:09:16.375 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:16.375 }, 00:09:16.375 "ctrlr_data": { 00:09:16.375 "cntlid": 1, 00:09:16.375 "vendor_id": "0x8086", 00:09:16.375 "model_number": "SPDK bdev Controller", 00:09:16.375 "serial_number": "SPDK0", 00:09:16.375 "firmware_revision": "24.09", 00:09:16.375 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:16.375 "oacs": { 00:09:16.375 "security": 0, 00:09:16.375 "format": 0, 00:09:16.375 "firmware": 0, 00:09:16.375 "ns_manage": 0 00:09:16.375 }, 00:09:16.375 "multi_ctrlr": true, 00:09:16.375 
"ana_reporting": false 00:09:16.375 }, 00:09:16.375 "vs": { 00:09:16.375 "nvme_version": "1.3" 00:09:16.375 }, 00:09:16.375 "ns_data": { 00:09:16.375 "id": 1, 00:09:16.375 "can_share": true 00:09:16.375 } 00:09:16.375 } 00:09:16.375 ], 00:09:16.375 "mp_policy": "active_passive" 00:09:16.375 } 00:09:16.375 } 00:09:16.375 ] 00:09:16.375 05:56:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:16.375 05:56:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=77743 00:09:16.375 05:56:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:16.634 Running I/O for 10 seconds... 00:09:17.570 Latency(us) 00:09:17.570 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:17.570 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:17.570 Nvme0n1 : 1.00 7112.00 27.78 0.00 0.00 0.00 0.00 0.00 00:09:17.570 =================================================================================================================== 00:09:17.570 Total : 7112.00 27.78 0.00 0.00 0.00 0.00 0.00 00:09:17.570 00:09:18.506 05:56:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 010c0471-b93d-4d14-ba40-e0ba40e6d023 00:09:18.506 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:18.506 Nvme0n1 : 2.00 7112.00 27.78 0.00 0.00 0.00 0.00 0.00 00:09:18.506 =================================================================================================================== 00:09:18.506 Total : 7112.00 27.78 0.00 0.00 0.00 0.00 0.00 00:09:18.506 00:09:18.764 true 00:09:18.764 05:56:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 010c0471-b93d-4d14-ba40-e0ba40e6d023 00:09:18.764 05:56:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:19.022 05:56:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:19.022 05:56:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:19.022 05:56:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 77743 00:09:19.590 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:19.590 Nvme0n1 : 3.00 7196.67 28.11 0.00 0.00 0.00 0.00 0.00 00:09:19.590 =================================================================================================================== 00:09:19.590 Total : 7196.67 28.11 0.00 0.00 0.00 0.00 0.00 00:09:19.590 00:09:20.526 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:20.526 Nvme0n1 : 4.00 7175.50 28.03 0.00 0.00 0.00 0.00 0.00 00:09:20.526 =================================================================================================================== 00:09:20.526 Total : 7175.50 28.03 0.00 0.00 0.00 0.00 0.00 00:09:20.526 00:09:21.460 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:21.460 Nvme0n1 : 5.00 7137.40 27.88 0.00 0.00 0.00 0.00 0.00 00:09:21.460 =================================================================================================================== 00:09:21.460 Total : 7137.40 27.88 0.00 0.00 0.00 
0.00 0.00 00:09:21.460 00:09:22.834 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:22.834 Nvme0n1 : 6.00 7112.00 27.78 0.00 0.00 0.00 0.00 0.00 00:09:22.834 =================================================================================================================== 00:09:22.834 Total : 7112.00 27.78 0.00 0.00 0.00 0.00 0.00 00:09:22.834 00:09:23.770 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:23.770 Nvme0n1 : 7.00 7112.00 27.78 0.00 0.00 0.00 0.00 0.00 00:09:23.770 =================================================================================================================== 00:09:23.770 Total : 7112.00 27.78 0.00 0.00 0.00 0.00 0.00 00:09:23.770 00:09:24.705 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:24.705 Nvme0n1 : 8.00 7096.12 27.72 0.00 0.00 0.00 0.00 0.00 00:09:24.705 =================================================================================================================== 00:09:24.705 Total : 7096.12 27.72 0.00 0.00 0.00 0.00 0.00 00:09:24.705 00:09:25.642 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:25.642 Nvme0n1 : 9.00 6918.89 27.03 0.00 0.00 0.00 0.00 0.00 00:09:25.642 =================================================================================================================== 00:09:25.642 Total : 6918.89 27.03 0.00 0.00 0.00 0.00 0.00 00:09:25.642 00:09:26.579 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:26.579 Nvme0n1 : 10.00 6900.10 26.95 0.00 0.00 0.00 0.00 0.00 00:09:26.580 =================================================================================================================== 00:09:26.580 Total : 6900.10 26.95 0.00 0.00 0.00 0.00 0.00 00:09:26.580 00:09:26.580 00:09:26.580 Latency(us) 00:09:26.580 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:26.580 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:26.580 Nvme0n1 : 10.01 6908.74 26.99 0.00 0.00 18522.05 4974.78 239265.98 00:09:26.580 =================================================================================================================== 00:09:26.580 Total : 6908.74 26.99 0.00 0.00 18522.05 4974.78 239265.98 00:09:26.580 0 00:09:26.580 05:56:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 77727 00:09:26.580 05:56:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 77727 ']' 00:09:26.580 05:56:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 77727 00:09:26.580 05:56:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:09:26.580 05:56:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:26.580 05:56:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77727 00:09:26.580 killing process with pid 77727 00:09:26.580 Received shutdown signal, test time was about 10.000000 seconds 00:09:26.580 00:09:26.580 Latency(us) 00:09:26.580 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:26.580 =================================================================================================================== 00:09:26.580 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:26.580 05:56:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # 
process_name=reactor_1 00:09:26.580 05:56:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:26.580 05:56:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77727' 00:09:26.580 05:56:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 77727 00:09:26.580 05:56:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 77727 00:09:26.839 05:56:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:27.098 05:56:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:27.356 05:56:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 010c0471-b93d-4d14-ba40-e0ba40e6d023 00:09:27.356 05:56:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:27.615 05:56:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:27.615 05:56:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:27.615 05:56:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 77381 00:09:27.615 05:56:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 77381 00:09:27.615 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 77381 Killed "${NVMF_APP[@]}" "$@" 00:09:27.615 05:56:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:27.615 05:56:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:27.615 05:56:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:27.615 05:56:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:27.615 05:56:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:27.615 05:56:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=77876 00:09:27.615 05:56:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 77876 00:09:27.615 05:56:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:27.615 05:56:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 77876 ']' 00:09:27.615 05:56:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.615 05:56:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:27.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:27.615 05:56:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
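Note: this is where the "dirty" condition is provoked. The listener and subsystem are removed normally, but the nvmf target that owns the lvstore is then killed with SIGKILL so the store never sees a clean shutdown, and a fresh single-core target is started in its place. Roughly (commands and pid taken from this run; the pid differs between runs):

  kill -9 77381                                        # crash the target while the lvstore is still open
  ip netns exec nvmf_tgt_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &     # restart; the lvstore now has to be recovered from disk

The blobstore recovery notices that follow come from re-attaching the same AIO backing file to the new target.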
00:09:27.615 05:56:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:27.615 05:56:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:27.615 [2024-07-13 05:56:19.267739] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:09:27.615 [2024-07-13 05:56:19.268062] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:27.874 [2024-07-13 05:56:19.406696] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.874 [2024-07-13 05:56:19.438114] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:27.874 [2024-07-13 05:56:19.438168] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:27.874 [2024-07-13 05:56:19.438195] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:27.874 [2024-07-13 05:56:19.438203] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:27.874 [2024-07-13 05:56:19.438209] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:27.874 [2024-07-13 05:56:19.438233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.874 [2024-07-13 05:56:19.464827] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:27.874 05:56:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:27.874 05:56:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:09:27.874 05:56:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:27.874 05:56:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:27.874 05:56:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:27.874 05:56:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:27.874 05:56:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:28.132 [2024-07-13 05:56:19.753261] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:28.132 [2024-07-13 05:56:19.753638] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:28.132 [2024-07-13 05:56:19.753960] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:28.132 05:56:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:28.132 05:56:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 4e95bd52-70ab-4900-be47-b6836abf1d5a 00:09:28.132 05:56:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=4e95bd52-70ab-4900-be47-b6836abf1d5a 00:09:28.132 05:56:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:28.132 05:56:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 
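Note: the "Performing recovery on blobstore" / "Recover: blob 0x0, 0x1" notices above are the expected result of the SIGKILL: re-creating the AIO bdev on the same backing file makes the lvstore replay its metadata rather than load a cleanly written superblock. The harness then just waits for the lvol to reappear; a sketch of the equivalent calls (UUID from this run):

  scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
  scripts/rpc.py bdev_wait_for_examine                 # let the lvol module finish examining/claiming the bdev
  scripts/rpc.py bdev_get_bdevs -b 4e95bd52-70ab-4900-be47-b6836abf1d5a -t 2000   # wait up to 2000 ms for the lvol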
00:09:28.132 05:56:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:28.132 05:56:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:28.132 05:56:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:28.390 05:56:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4e95bd52-70ab-4900-be47-b6836abf1d5a -t 2000 00:09:28.648 [ 00:09:28.648 { 00:09:28.648 "name": "4e95bd52-70ab-4900-be47-b6836abf1d5a", 00:09:28.648 "aliases": [ 00:09:28.648 "lvs/lvol" 00:09:28.648 ], 00:09:28.648 "product_name": "Logical Volume", 00:09:28.648 "block_size": 4096, 00:09:28.648 "num_blocks": 38912, 00:09:28.648 "uuid": "4e95bd52-70ab-4900-be47-b6836abf1d5a", 00:09:28.648 "assigned_rate_limits": { 00:09:28.648 "rw_ios_per_sec": 0, 00:09:28.648 "rw_mbytes_per_sec": 0, 00:09:28.648 "r_mbytes_per_sec": 0, 00:09:28.648 "w_mbytes_per_sec": 0 00:09:28.648 }, 00:09:28.648 "claimed": false, 00:09:28.648 "zoned": false, 00:09:28.648 "supported_io_types": { 00:09:28.648 "read": true, 00:09:28.648 "write": true, 00:09:28.648 "unmap": true, 00:09:28.648 "flush": false, 00:09:28.648 "reset": true, 00:09:28.648 "nvme_admin": false, 00:09:28.648 "nvme_io": false, 00:09:28.648 "nvme_io_md": false, 00:09:28.648 "write_zeroes": true, 00:09:28.648 "zcopy": false, 00:09:28.648 "get_zone_info": false, 00:09:28.648 "zone_management": false, 00:09:28.648 "zone_append": false, 00:09:28.648 "compare": false, 00:09:28.648 "compare_and_write": false, 00:09:28.648 "abort": false, 00:09:28.648 "seek_hole": true, 00:09:28.648 "seek_data": true, 00:09:28.648 "copy": false, 00:09:28.648 "nvme_iov_md": false 00:09:28.648 }, 00:09:28.648 "driver_specific": { 00:09:28.648 "lvol": { 00:09:28.648 "lvol_store_uuid": "010c0471-b93d-4d14-ba40-e0ba40e6d023", 00:09:28.648 "base_bdev": "aio_bdev", 00:09:28.648 "thin_provision": false, 00:09:28.648 "num_allocated_clusters": 38, 00:09:28.648 "snapshot": false, 00:09:28.648 "clone": false, 00:09:28.648 "esnap_clone": false 00:09:28.648 } 00:09:28.648 } 00:09:28.648 } 00:09:28.648 ] 00:09:28.648 05:56:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:09:28.648 05:56:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 010c0471-b93d-4d14-ba40-e0ba40e6d023 00:09:28.648 05:56:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:28.907 05:56:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:28.907 05:56:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:28.907 05:56:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 010c0471-b93d-4d14-ba40-e0ba40e6d023 00:09:29.167 05:56:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:29.167 05:56:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:29.425 [2024-07-13 05:56:20.975345] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev 
aio_bdev being removed: closing lvstore lvs 00:09:29.425 05:56:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 010c0471-b93d-4d14-ba40-e0ba40e6d023 00:09:29.425 05:56:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:09:29.425 05:56:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 010c0471-b93d-4d14-ba40-e0ba40e6d023 00:09:29.425 05:56:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:29.425 05:56:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:29.425 05:56:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:29.425 05:56:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:29.425 05:56:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:29.425 05:56:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:29.425 05:56:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:29.425 05:56:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:29.425 05:56:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 010c0471-b93d-4d14-ba40-e0ba40e6d023 00:09:29.683 request: 00:09:29.683 { 00:09:29.683 "uuid": "010c0471-b93d-4d14-ba40-e0ba40e6d023", 00:09:29.683 "method": "bdev_lvol_get_lvstores", 00:09:29.683 "req_id": 1 00:09:29.683 } 00:09:29.683 Got JSON-RPC error response 00:09:29.683 response: 00:09:29.683 { 00:09:29.683 "code": -19, 00:09:29.683 "message": "No such device" 00:09:29.683 } 00:09:29.683 05:56:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:09:29.683 05:56:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:29.683 05:56:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:29.683 05:56:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:29.683 05:56:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:29.940 aio_bdev 00:09:29.940 05:56:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 4e95bd52-70ab-4900-be47-b6836abf1d5a 00:09:29.940 05:56:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=4e95bd52-70ab-4900-be47-b6836abf1d5a 00:09:29.940 05:56:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:29.940 05:56:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:09:29.940 05:56:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:29.940 05:56:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:29.940 05:56:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:30.197 05:56:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4e95bd52-70ab-4900-be47-b6836abf1d5a -t 2000 00:09:30.455 [ 00:09:30.455 { 00:09:30.455 "name": "4e95bd52-70ab-4900-be47-b6836abf1d5a", 00:09:30.455 "aliases": [ 00:09:30.455 "lvs/lvol" 00:09:30.455 ], 00:09:30.455 "product_name": "Logical Volume", 00:09:30.455 "block_size": 4096, 00:09:30.455 "num_blocks": 38912, 00:09:30.455 "uuid": "4e95bd52-70ab-4900-be47-b6836abf1d5a", 00:09:30.455 "assigned_rate_limits": { 00:09:30.455 "rw_ios_per_sec": 0, 00:09:30.455 "rw_mbytes_per_sec": 0, 00:09:30.455 "r_mbytes_per_sec": 0, 00:09:30.455 "w_mbytes_per_sec": 0 00:09:30.455 }, 00:09:30.455 "claimed": false, 00:09:30.455 "zoned": false, 00:09:30.455 "supported_io_types": { 00:09:30.455 "read": true, 00:09:30.455 "write": true, 00:09:30.455 "unmap": true, 00:09:30.455 "flush": false, 00:09:30.455 "reset": true, 00:09:30.455 "nvme_admin": false, 00:09:30.455 "nvme_io": false, 00:09:30.455 "nvme_io_md": false, 00:09:30.455 "write_zeroes": true, 00:09:30.455 "zcopy": false, 00:09:30.455 "get_zone_info": false, 00:09:30.455 "zone_management": false, 00:09:30.455 "zone_append": false, 00:09:30.455 "compare": false, 00:09:30.455 "compare_and_write": false, 00:09:30.455 "abort": false, 00:09:30.455 "seek_hole": true, 00:09:30.455 "seek_data": true, 00:09:30.455 "copy": false, 00:09:30.455 "nvme_iov_md": false 00:09:30.455 }, 00:09:30.455 "driver_specific": { 00:09:30.455 "lvol": { 00:09:30.455 "lvol_store_uuid": "010c0471-b93d-4d14-ba40-e0ba40e6d023", 00:09:30.455 "base_bdev": "aio_bdev", 00:09:30.455 "thin_provision": false, 00:09:30.455 "num_allocated_clusters": 38, 00:09:30.455 "snapshot": false, 00:09:30.455 "clone": false, 00:09:30.455 "esnap_clone": false 00:09:30.455 } 00:09:30.455 } 00:09:30.455 } 00:09:30.455 ] 00:09:30.455 05:56:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:09:30.455 05:56:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 010c0471-b93d-4d14-ba40-e0ba40e6d023 00:09:30.455 05:56:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:30.712 05:56:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:30.712 05:56:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 010c0471-b93d-4d14-ba40-e0ba40e6d023 00:09:30.712 05:56:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:30.970 05:56:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:30.970 05:56:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 4e95bd52-70ab-4900-be47-b6836abf1d5a 00:09:31.228 05:56:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 010c0471-b93d-4d14-ba40-e0ba40e6d023 00:09:31.486 05:56:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:31.742 05:56:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:31.999 ************************************ 00:09:31.999 END TEST lvs_grow_dirty 00:09:31.999 ************************************ 00:09:31.999 00:09:31.999 real 0m18.733s 00:09:31.999 user 0m39.872s 00:09:31.999 sys 0m8.295s 00:09:31.999 05:56:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:31.999 05:56:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:31.999 05:56:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:09:31.999 05:56:23 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:31.999 05:56:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:09:31.999 05:56:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:09:31.999 05:56:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:09:31.999 05:56:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:31.999 05:56:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:09:31.999 05:56:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:09:31.999 05:56:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:09:31.999 05:56:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:31.999 nvmf_trace.0 00:09:31.999 05:56:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:09:31.999 05:56:23 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:31.999 05:56:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:31.999 05:56:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:09:32.257 05:56:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:32.257 05:56:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:09:32.257 05:56:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:32.257 05:56:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:32.257 rmmod nvme_tcp 00:09:32.257 rmmod nvme_fabrics 00:09:32.257 rmmod nvme_keyring 00:09:32.257 05:56:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:32.257 05:56:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:09:32.257 05:56:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:09:32.257 05:56:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 77876 ']' 00:09:32.257 05:56:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 77876 00:09:32.257 05:56:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 77876 ']' 00:09:32.257 05:56:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 77876 00:09:32.257 05:56:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:09:32.257 05:56:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- 
# '[' Linux = Linux ']' 00:09:32.257 05:56:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77876 00:09:32.257 killing process with pid 77876 00:09:32.257 05:56:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:32.257 05:56:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:32.257 05:56:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77876' 00:09:32.257 05:56:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 77876 00:09:32.257 05:56:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 77876 00:09:32.516 05:56:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:32.516 05:56:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:32.516 05:56:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:32.516 05:56:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:32.516 05:56:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:32.516 05:56:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:32.516 05:56:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:32.516 05:56:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:32.516 05:56:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:32.516 ************************************ 00:09:32.517 END TEST nvmf_lvs_grow 00:09:32.517 ************************************ 00:09:32.517 00:09:32.517 real 0m38.882s 00:09:32.517 user 1m2.058s 00:09:32.517 sys 0m11.375s 00:09:32.517 05:56:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:32.517 05:56:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:32.517 05:56:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:32.517 05:56:24 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:32.517 05:56:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:32.517 05:56:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:32.517 05:56:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:32.517 ************************************ 00:09:32.517 START TEST nvmf_bdev_io_wait 00:09:32.517 ************************************ 00:09:32.517 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:32.517 * Looking for test storage... 
00:09:32.517 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:32.517 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:32.517 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:32.517 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:32.517 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:32.517 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:32.517 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:32.517 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:32.517 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:32.517 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:32.517 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:32.517 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:32.517 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:32.517 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:09:32.517 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=d95af516-4532-4483-a837-b3cd72acabce 00:09:32.517 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:32.517 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:32.517 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:32.517 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:32.517 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:32.517 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:32.517 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:32.517 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:32.517 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.517 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.517 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.517 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:32.517 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.517 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:09:32.517 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:32.517 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:32.517 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:32.517 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:32.517 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:32.517 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:32.517 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:32.517 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:32.517 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:32.517 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:32.517 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:32.517 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:32.517 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:32.517 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:32.517 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:32.517 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:32.517 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:32.517 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:32.517 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:32.517 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:32.517 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:32.517 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:32.517 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:32.517 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:32.517 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:32.775 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:32.775 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:32.775 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:32.775 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:32.775 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:32.775 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:32.775 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:32.775 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:32.775 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:32.775 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:32.775 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:32.775 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:32.775 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:32.775 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:32.775 Cannot find device "nvmf_tgt_br" 00:09:32.775 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:09:32.775 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:32.775 Cannot find device "nvmf_tgt_br2" 00:09:32.775 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:09:32.775 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:32.775 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:32.775 Cannot find device "nvmf_tgt_br" 00:09:32.775 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:09:32.775 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:32.775 Cannot find device "nvmf_tgt_br2" 00:09:32.775 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:09:32.775 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:32.775 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 
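Note: the "Cannot find device ..." (and, just below, "Cannot open network namespace ...") messages are expected here; nvmf_veth_init first does a best-effort teardown of any topology left over from a previous run and then rebuilds it from scratch. The shape it builds next is roughly the following (commands as traced below; the second target interface/bridge pair is omitted for brevity):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br       # host side of the link
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br         # target side, moved into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge                                 # bridge that ties the host-side peers together
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br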
00:09:32.775 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:32.775 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:32.775 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:09:32.775 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:32.776 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:32.776 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:09:32.776 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:32.776 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:32.776 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:32.776 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:32.776 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:32.776 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:32.776 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:32.776 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:33.034 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:33.034 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:33.034 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:33.034 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:33.034 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:33.034 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:33.034 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:33.034 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:33.034 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:33.034 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:33.034 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:33.034 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:33.034 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:33.034 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:33.034 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:33.034 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:33.034 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:09:33.034 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:09:33.034 00:09:33.034 --- 10.0.0.2 ping statistics --- 00:09:33.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.034 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:09:33.034 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:33.034 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:33.034 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:09:33.034 00:09:33.034 --- 10.0.0.3 ping statistics --- 00:09:33.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.034 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:09:33.034 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:33.034 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:33.034 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:09:33.034 00:09:33.034 --- 10.0.0.1 ping statistics --- 00:09:33.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.034 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:09:33.034 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:33.034 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:09:33.034 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:33.034 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:33.034 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:33.034 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:33.034 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:33.034 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:33.035 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:33.035 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:33.035 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:33.035 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:33.035 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:33.035 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=78175 00:09:33.035 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 78175 00:09:33.035 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 78175 ']' 00:09:33.035 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:33.035 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.035 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:33.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:33.035 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
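With the cleanup done, nvmf_veth_init builds the test topology shown in the trace: the initiator end of a veth pair stays in the root namespace, the target ends are moved into nvmf_tgt_ns_spdk, both sides are attached to the nvmf_br bridge, TCP port 4420 is opened, and connectivity is verified with the pings above. A condensed sketch of those steps, using the names and addresses from this run (the second target interface, nvmf_tgt_if2 at 10.0.0.3, is handled the same way and omitted here for brevity):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                     # bridge the two halves together
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                          # root ns -> target ns
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1           # target ns -> root ns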
00:09:33.035 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:33.035 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:33.035 [2024-07-13 05:56:24.702602] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:09:33.035 [2024-07-13 05:56:24.702713] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:33.294 [2024-07-13 05:56:24.843913] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:33.294 [2024-07-13 05:56:24.878041] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:33.294 [2024-07-13 05:56:24.878108] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:33.294 [2024-07-13 05:56:24.878135] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:33.295 [2024-07-13 05:56:24.878143] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:33.295 [2024-07-13 05:56:24.878150] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:33.295 [2024-07-13 05:56:24.878214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:33.295 [2024-07-13 05:56:24.878364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:33.295 [2024-07-13 05:56:24.879038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:33.295 [2024-07-13 05:56:24.879082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.295 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:33.295 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:09:33.295 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:33.295 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:33.295 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:33.295 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:33.295 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:33.295 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.295 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:33.295 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.295 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:33.295 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.295 05:56:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:33.295 [2024-07-13 05:56:25.015910] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:33.553 05:56:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.553 05:56:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:33.553 
05:56:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.553 05:56:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:33.553 [2024-07-13 05:56:25.030765] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:33.553 05:56:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.553 05:56:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:33.553 05:56:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.553 05:56:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:33.553 Malloc0 00:09:33.553 05:56:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.553 05:56:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:33.553 05:56:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.553 05:56:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:33.553 05:56:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.553 05:56:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:33.553 05:56:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.553 05:56:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:33.553 05:56:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.553 05:56:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:33.553 05:56:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.553 05:56:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:33.553 [2024-07-13 05:56:25.087785] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:33.553 05:56:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.553 05:56:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=78208 00:09:33.553 05:56:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:33.553 05:56:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:33.553 05:56:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=78210 00:09:33.553 05:56:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:33.553 05:56:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:33.553 05:56:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:33.553 05:56:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:33.553 { 00:09:33.553 "params": { 00:09:33.553 "name": "Nvme$subsystem", 00:09:33.553 "trtype": "$TEST_TRANSPORT", 00:09:33.553 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:33.553 "adrfam": "ipv4", 00:09:33.553 "trsvcid": "$NVMF_PORT", 00:09:33.553 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:33.553 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:09:33.553 "hdgst": ${hdgst:-false}, 00:09:33.553 "ddgst": ${ddgst:-false} 00:09:33.553 }, 00:09:33.553 "method": "bdev_nvme_attach_controller" 00:09:33.553 } 00:09:33.553 EOF 00:09:33.553 )") 00:09:33.553 05:56:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=78212 00:09:33.553 05:56:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:33.553 05:56:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:33.553 05:56:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:33.553 05:56:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:33.553 05:56:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:33.553 05:56:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:33.553 { 00:09:33.553 "params": { 00:09:33.554 "name": "Nvme$subsystem", 00:09:33.554 "trtype": "$TEST_TRANSPORT", 00:09:33.554 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:33.554 "adrfam": "ipv4", 00:09:33.554 "trsvcid": "$NVMF_PORT", 00:09:33.554 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:33.554 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:33.554 "hdgst": ${hdgst:-false}, 00:09:33.554 "ddgst": ${ddgst:-false} 00:09:33.554 }, 00:09:33.554 "method": "bdev_nvme_attach_controller" 00:09:33.554 } 00:09:33.554 EOF 00:09:33.554 )") 00:09:33.554 05:56:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=78215 00:09:33.554 05:56:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:33.554 05:56:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:33.554 05:56:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:33.554 05:56:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:33.554 05:56:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:33.554 05:56:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:33.554 05:56:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:33.554 05:56:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:33.554 05:56:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:33.554 05:56:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:33.554 { 00:09:33.554 "params": { 00:09:33.554 "name": "Nvme$subsystem", 00:09:33.554 "trtype": "$TEST_TRANSPORT", 00:09:33.554 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:33.554 "adrfam": "ipv4", 00:09:33.554 "trsvcid": "$NVMF_PORT", 00:09:33.554 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:33.554 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:33.554 "hdgst": ${hdgst:-false}, 00:09:33.554 "ddgst": ${ddgst:-false} 00:09:33.554 }, 00:09:33.554 "method": "bdev_nvme_attach_controller" 00:09:33.554 } 00:09:33.554 EOF 00:09:33.554 )") 00:09:33.554 05:56:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:33.554 05:56:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:09:33.554 05:56:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:33.554 05:56:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:33.554 05:56:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:33.554 05:56:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:33.554 05:56:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:33.554 05:56:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:33.554 { 00:09:33.554 "params": { 00:09:33.554 "name": "Nvme$subsystem", 00:09:33.554 "trtype": "$TEST_TRANSPORT", 00:09:33.554 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:33.554 "adrfam": "ipv4", 00:09:33.554 "trsvcid": "$NVMF_PORT", 00:09:33.554 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:33.554 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:33.554 "hdgst": ${hdgst:-false}, 00:09:33.554 "ddgst": ${ddgst:-false} 00:09:33.554 }, 00:09:33.554 "method": "bdev_nvme_attach_controller" 00:09:33.554 } 00:09:33.554 EOF 00:09:33.554 )") 00:09:33.554 05:56:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:33.554 05:56:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:33.554 "params": { 00:09:33.554 "name": "Nvme1", 00:09:33.554 "trtype": "tcp", 00:09:33.554 "traddr": "10.0.0.2", 00:09:33.554 "adrfam": "ipv4", 00:09:33.554 "trsvcid": "4420", 00:09:33.554 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:33.554 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:33.554 "hdgst": false, 00:09:33.554 "ddgst": false 00:09:33.554 }, 00:09:33.554 "method": "bdev_nvme_attach_controller" 00:09:33.554 }' 00:09:33.554 05:56:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:33.554 05:56:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:33.554 05:56:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:33.554 "params": { 00:09:33.554 "name": "Nvme1", 00:09:33.554 "trtype": "tcp", 00:09:33.554 "traddr": "10.0.0.2", 00:09:33.554 "adrfam": "ipv4", 00:09:33.554 "trsvcid": "4420", 00:09:33.554 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:33.554 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:33.554 "hdgst": false, 00:09:33.554 "ddgst": false 00:09:33.554 }, 00:09:33.554 "method": "bdev_nvme_attach_controller" 00:09:33.554 }' 00:09:33.554 05:56:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:33.554 05:56:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:33.554 05:56:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:33.554 "params": { 00:09:33.554 "name": "Nvme1", 00:09:33.554 "trtype": "tcp", 00:09:33.554 "traddr": "10.0.0.2", 00:09:33.554 "adrfam": "ipv4", 00:09:33.554 "trsvcid": "4420", 00:09:33.554 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:33.554 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:33.554 "hdgst": false, 00:09:33.554 "ddgst": false 00:09:33.554 }, 00:09:33.554 "method": "bdev_nvme_attach_controller" 00:09:33.554 }' 00:09:33.554 05:56:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:09:33.554 05:56:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:33.554 05:56:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:33.554 "params": { 00:09:33.554 "name": "Nvme1", 00:09:33.554 "trtype": "tcp", 00:09:33.554 "traddr": "10.0.0.2", 00:09:33.554 "adrfam": "ipv4", 00:09:33.554 "trsvcid": "4420", 00:09:33.554 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:33.554 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:33.554 "hdgst": false, 00:09:33.554 "ddgst": false 00:09:33.554 }, 00:09:33.554 "method": "bdev_nvme_attach_controller" 00:09:33.554 }' 00:09:33.554 [2024-07-13 05:56:25.152820] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:09:33.554 [2024-07-13 05:56:25.152907] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:33.554 [2024-07-13 05:56:25.164821] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:09:33.554 [2024-07-13 05:56:25.164897] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:33.554 [2024-07-13 05:56:25.167400] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:09:33.554 [2024-07-13 05:56:25.167602] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:33.554 [2024-07-13 05:56:25.171147] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:09:33.554 [2024-07-13 05:56:25.171221] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:33.554 05:56:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 78208 00:09:33.812 [2024-07-13 05:56:25.333676] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.812 [2024-07-13 05:56:25.360200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:09:33.812 [2024-07-13 05:56:25.375566] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.812 [2024-07-13 05:56:25.391210] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:33.812 [2024-07-13 05:56:25.401700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:33.812 [2024-07-13 05:56:25.420770] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.812 [2024-07-13 05:56:25.432475] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:33.812 [2024-07-13 05:56:25.447583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:09:33.812 [2024-07-13 05:56:25.461623] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.812 Running I/O for 1 seconds... 
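bdev_io_wait.sh fans four bdevperf workloads out against the same subsystem, write on core mask 0x10, read on 0x20, flush on 0x40 and unmap on 0x80, remembers their PIDs (78208, 78210, 78212 and 78215 above) and then waits on each one, which is what produces the per-workload latency tables that follow. A simplified sketch of that fan-out/fan-in, assuming the same helper and options as the trace:

    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf

    # Launch the four workloads in the background and keep their PIDs.
    $BDEVPERF -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
    $BDEVPERF -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
    $BDEVPERF -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
    $BDEVPERF -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!

    # Collect the results; each wait returns when the matching workload finishes.
    wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"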
00:09:33.812 [2024-07-13 05:56:25.479404] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:33.812 [2024-07-13 05:56:25.487770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:09:33.812 Running I/O for 1 seconds... 00:09:33.812 [2024-07-13 05:56:25.518984] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:34.070 Running I/O for 1 seconds... 00:09:34.070 Running I/O for 1 seconds... 00:09:35.027 00:09:35.027 Latency(us) 00:09:35.027 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:35.027 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:35.027 Nvme1n1 : 1.00 165351.25 645.90 0.00 0.00 771.05 363.05 1094.75 00:09:35.027 =================================================================================================================== 00:09:35.027 Total : 165351.25 645.90 0.00 0.00 771.05 363.05 1094.75 00:09:35.027 00:09:35.027 Latency(us) 00:09:35.027 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:35.027 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:35.027 Nvme1n1 : 1.01 10496.89 41.00 0.00 0.00 12143.84 6940.86 20137.43 00:09:35.027 =================================================================================================================== 00:09:35.027 Total : 10496.89 41.00 0.00 0.00 12143.84 6940.86 20137.43 00:09:35.027 00:09:35.027 Latency(us) 00:09:35.027 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:35.027 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:35.027 Nvme1n1 : 1.01 7548.27 29.49 0.00 0.00 16858.34 10783.65 27405.96 00:09:35.027 =================================================================================================================== 00:09:35.027 Total : 7548.27 29.49 0.00 0.00 16858.34 10783.65 27405.96 00:09:35.027 00:09:35.027 Latency(us) 00:09:35.027 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:35.027 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:35.027 Nvme1n1 : 1.01 8590.49 33.56 0.00 0.00 14838.24 7626.01 25499.46 00:09:35.027 =================================================================================================================== 00:09:35.027 Total : 8590.49 33.56 0.00 0.00 14838.24 7626.01 25499.46 00:09:35.027 05:56:26 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 78210 00:09:35.027 05:56:26 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 78212 00:09:35.027 05:56:26 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 78215 00:09:35.287 05:56:26 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:35.287 05:56:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.287 05:56:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:35.287 05:56:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.287 05:56:26 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:35.287 05:56:26 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:35.287 05:56:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:35.287 05:56:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:09:35.287 05:56:26 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:35.287 05:56:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:09:35.287 05:56:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:35.287 05:56:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:35.287 rmmod nvme_tcp 00:09:35.287 rmmod nvme_fabrics 00:09:35.287 rmmod nvme_keyring 00:09:35.287 05:56:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:35.287 05:56:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:09:35.287 05:56:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:09:35.287 05:56:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 78175 ']' 00:09:35.287 05:56:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 78175 00:09:35.287 05:56:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 78175 ']' 00:09:35.287 05:56:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 78175 00:09:35.287 05:56:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:09:35.287 05:56:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:35.287 05:56:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78175 00:09:35.287 05:56:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:35.287 05:56:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:35.287 killing process with pid 78175 00:09:35.287 05:56:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78175' 00:09:35.287 05:56:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 78175 00:09:35.287 05:56:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 78175 00:09:35.547 05:56:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:35.547 05:56:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:35.547 05:56:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:35.547 05:56:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:35.547 05:56:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:35.547 05:56:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:35.547 05:56:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:35.547 05:56:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.547 05:56:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:35.547 00:09:35.547 real 0m2.916s 00:09:35.547 user 0m12.554s 00:09:35.547 sys 0m1.900s 00:09:35.547 05:56:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:35.547 05:56:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:35.547 ************************************ 00:09:35.547 END TEST nvmf_bdev_io_wait 00:09:35.547 ************************************ 00:09:35.547 05:56:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:35.547 05:56:27 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:35.547 05:56:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:35.547 05:56:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:35.547 05:56:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:35.547 ************************************ 00:09:35.547 START TEST nvmf_queue_depth 00:09:35.547 ************************************ 00:09:35.547 05:56:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:35.547 * Looking for test storage... 00:09:35.547 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:35.547 05:56:27 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:35.547 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:35.547 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:35.547 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:35.547 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:35.547 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:35.547 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:35.547 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:35.547 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:35.547 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:35.547 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:35.547 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:35.547 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:09:35.547 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=d95af516-4532-4483-a837-b3cd72acabce 00:09:35.547 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:35.547 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:35.547 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:35.547 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:35.547 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:35.547 05:56:27 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:35.547 05:56:27 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:35.547 05:56:27 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:35.547 05:56:27 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.547 05:56:27 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.547 05:56:27 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.547 05:56:27 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:35.547 05:56:27 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.547 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:09:35.547 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:35.547 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:35.547 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:35.547 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:35.547 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:35.547 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:35.547 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:35.547 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:35.547 05:56:27 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:35.548 05:56:27 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:35.548 05:56:27 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:35.548 05:56:27 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:35.548 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:35.548 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:35.548 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:35.548 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:35.548 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:35.548 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:35.548 05:56:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:35.548 05:56:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.548 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:35.548 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:35.548 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:35.548 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:35.548 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:35.548 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:35.548 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:35.548 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:35.548 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:35.548 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:35.548 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:35.548 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:35.548 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:35.548 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:35.548 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:35.548 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:35.548 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:35.548 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:35.548 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:35.548 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:35.548 Cannot find device "nvmf_tgt_br" 00:09:35.548 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:09:35.548 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:35.548 Cannot find device "nvmf_tgt_br2" 00:09:35.548 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:09:35.548 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:35.548 05:56:27 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:35.548 Cannot find device "nvmf_tgt_br" 00:09:35.548 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:09:35.548 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:35.548 Cannot find device "nvmf_tgt_br2" 00:09:35.807 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:09:35.807 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:35.807 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:35.807 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:35.807 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:35.807 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:09:35.807 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:35.807 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:35.807 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:09:35.807 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:35.807 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:35.807 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:35.807 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:35.807 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:35.807 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:35.807 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:35.807 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:35.807 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:35.807 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:35.807 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:35.807 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:35.807 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:35.807 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:35.807 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:35.807 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:35.807 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:35.807 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:35.807 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:35.807 
05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:35.807 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:35.807 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:35.807 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:35.807 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:35.807 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:35.807 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.105 ms 00:09:35.807 00:09:35.807 --- 10.0.0.2 ping statistics --- 00:09:35.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.807 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:09:35.807 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:35.807 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:35.807 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:09:35.807 00:09:35.807 --- 10.0.0.3 ping statistics --- 00:09:35.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.807 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:09:35.807 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:35.807 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:35.807 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:09:35.807 00:09:35.807 --- 10.0.0.1 ping statistics --- 00:09:35.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.807 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:09:35.807 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:35.807 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:09:35.807 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:35.807 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:35.807 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:35.807 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:35.807 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:35.807 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:35.807 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:36.066 05:56:27 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:36.066 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:36.066 05:56:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:36.066 05:56:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:36.066 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=78414 00:09:36.066 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:36.066 05:56:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 78414 00:09:36.066 05:56:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 78414 ']' 00:09:36.066 05:56:27 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:36.066 05:56:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:36.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:36.066 05:56:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:36.066 05:56:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:36.066 05:56:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:36.066 [2024-07-13 05:56:27.596794] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:09:36.066 [2024-07-13 05:56:27.596908] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:36.066 [2024-07-13 05:56:27.733918] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.066 [2024-07-13 05:56:27.768068] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:36.066 [2024-07-13 05:56:27.768138] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:36.066 [2024-07-13 05:56:27.768149] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:36.066 [2024-07-13 05:56:27.768156] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:36.066 [2024-07-13 05:56:27.768163] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
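For the queue-depth test the target is brought up the same way as before, just pinned to a single core: nvmfappstart launches nvmf_tgt inside the nvmf_tgt_ns_spdk namespace with -m 0x2, and waitforlisten blocks until the application answers on /var/tmp/spdk.sock. A loose approximation of that start-and-wait pattern; the polling loop below is an editorial stand-in for waitforlisten, not its actual implementation:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!

    # Poll the RPC socket until the target is ready to accept commands.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done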
00:09:36.066 [2024-07-13 05:56:27.768191] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:36.325 [2024-07-13 05:56:27.799269] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:36.894 05:56:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:36.894 05:56:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:09:36.894 05:56:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:36.894 05:56:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:36.894 05:56:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:36.894 05:56:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:36.894 05:56:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:36.894 05:56:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.894 05:56:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:36.894 [2024-07-13 05:56:28.519039] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:36.894 05:56:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.894 05:56:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:36.894 05:56:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.894 05:56:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:36.894 Malloc0 00:09:36.894 05:56:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.894 05:56:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:36.894 05:56:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.894 05:56:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:36.894 05:56:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.894 05:56:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:36.894 05:56:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.894 05:56:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:36.894 05:56:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.894 05:56:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:36.894 05:56:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.894 05:56:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:36.894 [2024-07-13 05:56:28.572535] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:36.894 05:56:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.894 05:56:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=78446 00:09:36.894 05:56:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:36.894 05:56:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:36.894 05:56:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 78446 /var/tmp/bdevperf.sock 00:09:36.894 05:56:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 78446 ']' 00:09:36.894 05:56:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:36.894 05:56:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:36.894 05:56:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:36.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:36.894 05:56:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:36.894 05:56:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:37.153 [2024-07-13 05:56:28.623237] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:09:37.153 [2024-07-13 05:56:28.623321] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78446 ] 00:09:37.153 [2024-07-13 05:56:28.760163] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.153 [2024-07-13 05:56:28.801265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.153 [2024-07-13 05:56:28.834240] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:37.153 05:56:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:37.153 05:56:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:09:37.153 05:56:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:37.153 05:56:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.153 05:56:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:37.412 NVMe0n1 00:09:37.412 05:56:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.412 05:56:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:37.412 Running I/O for 10 seconds... 
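Unlike the bdev_io_wait workloads, this bdevperf is started with -z, so it comes up idle and waits to be configured over its own RPC socket (/var/tmp/bdevperf.sock); the NVMe-oF controller is then attached through that socket and the run is kicked off with bdevperf.py perform_tests, which yields the 10-second verify summary below. A condensed sketch of that sequence, with paths and arguments taken from this run (the direct rpc.py call stands in for the harness's rpc_cmd wrapper):

    # 1. Start bdevperf idle, listening on a private RPC socket.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    bdevperf_pid=$!

    # 2. Attach the namespace exported by the target; it shows up as bdev NVMe0n1.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1

    # 3. Trigger the actual I/O run and wait for the summary table.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests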
00:09:49.622 00:09:49.622 Latency(us) 00:09:49.622 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:49.622 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:49.622 Verification LBA range: start 0x0 length 0x4000 00:09:49.622 NVMe0n1 : 10.09 8819.24 34.45 0.00 0.00 115551.25 24665.37 92465.34 00:09:49.622 =================================================================================================================== 00:09:49.622 Total : 8819.24 34.45 0.00 0.00 115551.25 24665.37 92465.34 00:09:49.622 0 00:09:49.622 05:56:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 78446 00:09:49.622 05:56:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 78446 ']' 00:09:49.622 05:56:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 78446 00:09:49.622 05:56:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:09:49.622 05:56:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:49.622 05:56:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78446 00:09:49.622 05:56:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:49.622 05:56:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:49.622 killing process with pid 78446 00:09:49.622 05:56:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78446' 00:09:49.622 Received shutdown signal, test time was about 10.000000 seconds 00:09:49.622 00:09:49.622 Latency(us) 00:09:49.622 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:49.622 =================================================================================================================== 00:09:49.622 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:49.622 05:56:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 78446 00:09:49.622 05:56:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 78446 00:09:49.622 05:56:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:49.622 05:56:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:49.622 05:56:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:49.622 05:56:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:09:49.622 05:56:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:49.622 05:56:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:09:49.622 05:56:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:49.622 05:56:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:49.622 rmmod nvme_tcp 00:09:49.622 rmmod nvme_fabrics 00:09:49.622 rmmod nvme_keyring 00:09:49.622 05:56:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:49.622 05:56:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:09:49.622 05:56:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:09:49.622 05:56:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 78414 ']' 00:09:49.622 05:56:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 78414 00:09:49.622 05:56:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 78414 ']' 00:09:49.622 
05:56:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 78414 00:09:49.622 05:56:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:09:49.622 05:56:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:49.622 05:56:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78414 00:09:49.622 05:56:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:49.622 05:56:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:49.622 05:56:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78414' 00:09:49.622 killing process with pid 78414 00:09:49.622 05:56:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 78414 00:09:49.622 05:56:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 78414 00:09:49.622 05:56:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:49.622 05:56:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:49.622 05:56:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:49.622 05:56:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:49.622 05:56:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:49.622 05:56:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:49.623 05:56:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:49.623 05:56:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:49.623 05:56:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:49.623 00:09:49.623 real 0m12.517s 00:09:49.623 user 0m21.577s 00:09:49.623 sys 0m1.937s 00:09:49.623 05:56:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:49.623 05:56:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:49.623 ************************************ 00:09:49.623 END TEST nvmf_queue_depth 00:09:49.623 ************************************ 00:09:49.623 05:56:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:49.623 05:56:39 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:49.623 05:56:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:49.623 05:56:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:49.623 05:56:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:49.623 ************************************ 00:09:49.623 START TEST nvmf_target_multipath 00:09:49.623 ************************************ 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:49.623 * Looking for test storage... 
00:09:49.623 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=d95af516-4532-4483-a837-b3cd72acabce 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:49.623 05:56:39 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:49.623 Cannot find device "nvmf_tgt_br" 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:49.623 Cannot find device "nvmf_tgt_br2" 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:49.623 Cannot find device "nvmf_tgt_br" 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:09:49.623 
05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:49.623 Cannot find device "nvmf_tgt_br2" 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:49.623 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:49.623 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:49.623 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:49.624 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:49.624 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:49.624 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:49.624 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:49.624 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:49.624 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:49.624 05:56:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:49.624 05:56:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:49.624 05:56:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:49.624 05:56:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:49.624 05:56:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:49.624 05:56:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:49.624 05:56:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:49.624 05:56:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:49.624 05:56:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:09:49.624 05:56:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:49.624 05:56:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:49.624 05:56:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:49.624 05:56:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:49.624 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:49.624 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:09:49.624 00:09:49.624 --- 10.0.0.2 ping statistics --- 00:09:49.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:49.624 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:09:49.624 05:56:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:49.624 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:49.624 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:09:49.624 00:09:49.624 --- 10.0.0.3 ping statistics --- 00:09:49.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:49.624 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:09:49.624 05:56:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:49.624 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:49.624 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:09:49.624 00:09:49.624 --- 10.0.0.1 ping statistics --- 00:09:49.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:49.624 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:09:49.624 05:56:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:49.624 05:56:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:09:49.624 05:56:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:49.624 05:56:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:49.624 05:56:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:49.624 05:56:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:49.624 05:56:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:49.624 05:56:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:49.624 05:56:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:49.624 05:56:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:09:49.624 05:56:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:09:49.624 05:56:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:09:49.624 05:56:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:49.624 05:56:40 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:49.624 05:56:40 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:49.624 05:56:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=78760 00:09:49.624 05:56:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
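The nvmf_veth_init trace above builds the test topology: one network namespace (nvmf_tgt_ns_spdk) holding both target interfaces, a veth pair per interface, and a bridge joining them to the initiator side, with 4420/tcp allowed in. Condensed into plain iproute2/iptables commands (names and addresses exactly as logged; ordering slightly compacted and error handling omitted), it is roughly:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # first target portal
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target portal
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  ip link add nvmf_br type bridge
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br

  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3    # reachability check before starting the target

The ping statistics above (0% loss against 10.0.0.2 and 10.0.0.3, and against 10.0.0.1 from inside the namespace) confirm the bridge is passing traffic before nvmf_tgt is launched inside the namespace.
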
00:09:49.624 05:56:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 78760 00:09:49.624 05:56:40 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@829 -- # '[' -z 78760 ']' 00:09:49.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.624 05:56:40 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.624 05:56:40 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:49.624 05:56:40 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.624 05:56:40 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:49.624 05:56:40 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:49.624 [2024-07-13 05:56:40.182364] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:09:49.624 [2024-07-13 05:56:40.182492] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:49.624 [2024-07-13 05:56:40.323675] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:49.624 [2024-07-13 05:56:40.369309] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:49.624 [2024-07-13 05:56:40.369391] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:49.624 [2024-07-13 05:56:40.369407] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:49.624 [2024-07-13 05:56:40.369417] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:49.624 [2024-07-13 05:56:40.369426] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
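With the target application now running inside the namespace, the remainder of this multipath run (traced below) provisions one subsystem behind two TCP portals, connects the initiator to both, and then flips per-listener ANA states while fio verifies I/O. Every command in the condensed recap below is taken from the trace that follows; it is not additional console output, and the long host NQN/ID are abbreviated via the NVME_HOSTNQN/NVME_HOSTID variables the script derives (assumed exported here).

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # One 64 MiB malloc namespace in an ANA-reporting subsystem, exposed on both portal addresses.
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

  # Connect through both portals (flags exactly as the script passes them); the host then
  # exposes one subsystem with two controller paths, nvme0c0n1 and nvme0c1n1.
  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp \
      -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G
  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp \
      -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G

  # Verification I/O comes from: scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6
  # While fio runs against /dev/nvme0n1, the test flips per-listener ANA states on the target
  # and polls the host's view of each path:
  $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420 -n inaccessible      # also: non_optimized, optimized
  cat /sys/block/nvme0c0n1/ana_state                  # the file check_ana_state waits on
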
00:09:49.624 [2024-07-13 05:56:40.369824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:49.624 [2024-07-13 05:56:40.370183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:49.624 [2024-07-13 05:56:40.370412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:49.624 [2024-07-13 05:56:40.370442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.624 [2024-07-13 05:56:40.405008] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:49.624 05:56:41 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:49.624 05:56:41 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@862 -- # return 0 00:09:49.624 05:56:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:49.624 05:56:41 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:49.624 05:56:41 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:49.624 05:56:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:49.624 05:56:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:49.624 [2024-07-13 05:56:41.340435] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:49.883 05:56:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:09:50.141 Malloc0 00:09:50.141 05:56:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:09:50.399 05:56:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:50.399 05:56:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:50.657 [2024-07-13 05:56:42.301411] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:50.657 05:56:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:50.915 [2024-07-13 05:56:42.533622] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:50.915 05:56:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid=d95af516-4532-4483-a837-b3cd72acabce -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:09:51.174 05:56:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid=d95af516-4532-4483-a837-b3cd72acabce -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:09:51.174 05:56:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:09:51.174 05:56:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # 
local i=0 00:09:51.174 05:56:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:51.174 05:56:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:51.174 05:56:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:09:53.700 05:56:44 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:53.700 05:56:44 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:53.700 05:56:44 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:53.700 05:56:44 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:53.700 05:56:44 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:53.700 05:56:44 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:09:53.700 05:56:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:09:53.700 05:56:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:09:53.700 05:56:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:09:53.700 05:56:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:53.700 05:56:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:09:53.700 05:56:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:09:53.700 05:56:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:09:53.700 05:56:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:09:53.700 05:56:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:09:53.700 05:56:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:09:53.700 05:56:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:09:53.700 05:56:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:09:53.700 05:56:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:09:53.700 05:56:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:09:53.700 05:56:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:53.700 05:56:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:53.700 05:56:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:53.700 05:56:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:09:53.700 05:56:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:53.700 05:56:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:09:53.700 05:56:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:53.700 05:56:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:53.700 05:56:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:53.700 05:56:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:53.700 05:56:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:53.700 05:56:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:09:53.700 05:56:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=78845 00:09:53.700 05:56:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:53.700 05:56:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:09:53.700 [global] 00:09:53.700 thread=1 00:09:53.700 invalidate=1 00:09:53.700 rw=randrw 00:09:53.700 time_based=1 00:09:53.700 runtime=6 00:09:53.700 ioengine=libaio 00:09:53.700 direct=1 00:09:53.700 bs=4096 00:09:53.700 iodepth=128 00:09:53.700 norandommap=0 00:09:53.700 numjobs=1 00:09:53.700 00:09:53.700 verify_dump=1 00:09:53.700 verify_backlog=512 00:09:53.700 verify_state_save=0 00:09:53.700 do_verify=1 00:09:53.700 verify=crc32c-intel 00:09:53.700 [job0] 00:09:53.700 filename=/dev/nvme0n1 00:09:53.700 Could not set queue depth (nvme0n1) 00:09:53.700 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:53.700 fio-3.35 00:09:53.700 Starting 1 thread 00:09:54.266 05:56:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:09:54.524 05:56:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:54.782 05:56:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:09:54.782 05:56:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:54.782 05:56:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:54.782 05:56:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:54.782 05:56:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:09:54.782 05:56:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:54.782 05:56:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:09:54.782 05:56:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:54.782 05:56:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:54.782 05:56:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:54.782 05:56:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:54.782 05:56:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:54.782 05:56:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:09:55.040 05:56:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:55.298 05:56:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:09:55.298 05:56:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:55.298 05:56:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:55.298 05:56:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:55.298 05:56:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:55.298 05:56:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:55.298 05:56:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:09:55.298 05:56:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:55.298 05:56:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:55.298 05:56:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:55.298 05:56:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:55.298 05:56:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:55.298 05:56:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 78845 00:09:59.482 00:09:59.482 job0: (groupid=0, jobs=1): err= 0: pid=78866: Sat Jul 13 05:56:51 2024 00:09:59.482 read: IOPS=10.5k, BW=40.9MiB/s (42.9MB/s)(246MiB/6007msec) 00:09:59.482 slat (usec): min=3, max=7717, avg=55.84, stdev=215.79 00:09:59.482 clat (usec): min=1943, max=19038, avg=8311.48, stdev=1400.83 00:09:59.482 lat (usec): min=1953, max=19048, avg=8367.32, stdev=1403.55 00:09:59.482 clat percentiles (usec): 00:09:59.482 | 1.00th=[ 4424], 5.00th=[ 6521], 10.00th=[ 7242], 20.00th=[ 7635], 00:09:59.482 | 30.00th=[ 7898], 40.00th=[ 8094], 50.00th=[ 8160], 60.00th=[ 8291], 00:09:59.482 | 70.00th=[ 8455], 80.00th=[ 8848], 90.00th=[ 9372], 95.00th=[11469], 00:09:59.482 | 99.00th=[12911], 99.50th=[13173], 99.90th=[17433], 99.95th=[18220], 00:09:59.482 | 99.99th=[19006] 00:09:59.482 bw ( KiB/s): min= 2616, max=28448, per=52.77%, avg=22090.00, stdev=8489.86, samples=12 00:09:59.482 iops : min= 654, max= 7112, avg=5522.50, stdev=2122.46, samples=12 00:09:59.482 write: IOPS=6414, BW=25.1MiB/s (26.3MB/s)(130MiB/5181msec); 0 zone resets 00:09:59.482 slat (usec): min=7, max=2490, avg=64.80, stdev=155.41 00:09:59.482 clat (usec): min=2323, max=13830, avg=7210.12, stdev=1217.08 00:09:59.482 lat (usec): min=2345, max=13854, avg=7274.92, stdev=1222.28 00:09:59.482 clat percentiles (usec): 00:09:59.482 | 1.00th=[ 3523], 5.00th=[ 4424], 10.00th=[ 5669], 20.00th=[ 6718], 00:09:59.482 | 30.00th=[ 7046], 40.00th=[ 7242], 50.00th=[ 7373], 60.00th=[ 7570], 00:09:59.482 | 70.00th=[ 7701], 80.00th=[ 7898], 90.00th=[ 8225], 95.00th=[ 8455], 00:09:59.482 | 99.00th=[11076], 99.50th=[11600], 99.90th=[12780], 99.95th=[13042], 00:09:59.482 | 99.99th=[13698] 00:09:59.482 bw ( KiB/s): min= 2624, max=28856, per=86.22%, avg=22122.67, stdev=8397.45, samples=12 00:09:59.482 iops : min= 656, max= 7214, avg=5530.83, stdev=2099.40, samples=12 00:09:59.482 lat (msec) : 2=0.01%, 4=1.30%, 10=93.45%, 20=5.25% 00:09:59.482 cpu : usr=4.83%, sys=22.91%, ctx=5610, majf=0, minf=133 00:09:59.482 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:59.482 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.482 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:59.482 issued rwts: total=62863,33233,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.482 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:59.482 00:09:59.482 Run status group 0 (all jobs): 00:09:59.482 READ: bw=40.9MiB/s (42.9MB/s), 40.9MiB/s-40.9MiB/s (42.9MB/s-42.9MB/s), io=246MiB (257MB), run=6007-6007msec 00:09:59.482 WRITE: bw=25.1MiB/s (26.3MB/s), 25.1MiB/s-25.1MiB/s (26.3MB/s-26.3MB/s), io=130MiB (136MB), run=5181-5181msec 00:09:59.482 00:09:59.482 Disk stats (read/write): 00:09:59.482 nvme0n1: ios=62208/32369, merge=0/0, ticks=494142/218883, in_queue=713025, util=98.50% 00:09:59.482 05:56:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:09:59.750 05:56:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 
4420 -n optimized 00:10:00.323 05:56:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:10:00.323 05:56:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:10:00.323 05:56:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:00.323 05:56:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:00.323 05:56:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:00.323 05:56:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:00.323 05:56:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:10:00.323 05:56:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:00.323 05:56:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:00.323 05:56:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:00.323 05:56:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:00.323 05:56:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:00.323 05:56:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:10:00.323 05:56:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=78950 00:10:00.323 05:56:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:10:00.323 05:56:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:00.323 [global] 00:10:00.323 thread=1 00:10:00.323 invalidate=1 00:10:00.323 rw=randrw 00:10:00.323 time_based=1 00:10:00.323 runtime=6 00:10:00.323 ioengine=libaio 00:10:00.323 direct=1 00:10:00.323 bs=4096 00:10:00.323 iodepth=128 00:10:00.323 norandommap=0 00:10:00.323 numjobs=1 00:10:00.324 00:10:00.324 verify_dump=1 00:10:00.324 verify_backlog=512 00:10:00.324 verify_state_save=0 00:10:00.324 do_verify=1 00:10:00.324 verify=crc32c-intel 00:10:00.324 [job0] 00:10:00.324 filename=/dev/nvme0n1 00:10:00.324 Could not set queue depth (nvme0n1) 00:10:00.324 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:00.324 fio-3.35 00:10:00.324 Starting 1 thread 00:10:01.259 05:56:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:10:01.517 05:56:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:01.775 05:56:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:10:01.775 05:56:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:01.775 05:56:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:01.775 05:56:53 nvmf_tcp.nvmf_target_multipath 
-- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:01.775 05:56:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:01.775 05:56:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:01.775 05:56:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:10:01.775 05:56:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:01.775 05:56:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:01.775 05:56:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:01.775 05:56:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:01.775 05:56:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:01.775 05:56:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:10:02.033 05:56:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:02.292 05:56:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:10:02.292 05:56:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:02.292 05:56:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:02.292 05:56:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:02.292 05:56:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:02.292 05:56:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:02.292 05:56:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:10:02.292 05:56:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:02.292 05:56:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:02.292 05:56:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:02.292 05:56:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:02.292 05:56:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:02.292 05:56:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 78950 00:10:06.480 00:10:06.480 job0: (groupid=0, jobs=1): err= 0: pid=78972: Sat Jul 13 05:56:58 2024 00:10:06.480 read: IOPS=11.4k, BW=44.6MiB/s (46.7MB/s)(268MiB/6007msec) 00:10:06.480 slat (usec): min=7, max=5975, avg=43.62, stdev=188.99 00:10:06.480 clat (usec): min=1035, max=14880, avg=7628.77, stdev=1856.60 00:10:06.480 lat (usec): min=1045, max=14890, avg=7672.39, stdev=1872.87 00:10:06.480 clat percentiles (usec): 00:10:06.480 | 1.00th=[ 3425], 5.00th=[ 4424], 10.00th=[ 5014], 20.00th=[ 5866], 00:10:06.480 | 30.00th=[ 6980], 40.00th=[ 7635], 50.00th=[ 8029], 60.00th=[ 8291], 00:10:06.480 | 70.00th=[ 8455], 80.00th=[ 8717], 90.00th=[ 9241], 95.00th=[10552], 00:10:06.480 | 99.00th=[12911], 99.50th=[13173], 99.90th=[13698], 99.95th=[13829], 00:10:06.480 | 99.99th=[14484] 00:10:06.480 bw ( KiB/s): min=12584, max=39584, per=54.03%, avg=24653.33, stdev=7805.52, samples=12 00:10:06.480 iops : min= 3146, max= 9896, avg=6163.33, stdev=1951.38, samples=12 00:10:06.480 write: IOPS=6862, BW=26.8MiB/s (28.1MB/s)(145MiB/5391msec); 0 zone resets 00:10:06.480 slat (usec): min=14, max=1573, avg=54.85, stdev=137.21 00:10:06.480 clat (usec): min=1784, max=13866, avg=6485.02, stdev=1801.93 00:10:06.480 lat (usec): min=1810, max=14239, avg=6539.87, stdev=1817.32 00:10:06.480 clat percentiles (usec): 00:10:06.480 | 1.00th=[ 2802], 5.00th=[ 3392], 10.00th=[ 3818], 20.00th=[ 4424], 00:10:06.480 | 30.00th=[ 5211], 40.00th=[ 6718], 50.00th=[ 7177], 60.00th=[ 7439], 00:10:06.480 | 70.00th=[ 7701], 80.00th=[ 7963], 90.00th=[ 8291], 95.00th=[ 8586], 00:10:06.480 | 99.00th=[10552], 99.50th=[11600], 99.90th=[12911], 99.95th=[13304], 00:10:06.480 | 99.99th=[13829] 00:10:06.480 bw ( KiB/s): min=13096, max=38720, per=89.69%, avg=24620.00, stdev=7539.53, samples=12 00:10:06.480 iops : min= 3274, max= 9680, avg=6155.00, stdev=1884.88, samples=12 00:10:06.480 lat (msec) : 2=0.04%, 4=6.28%, 10=89.75%, 20=3.93% 00:10:06.480 cpu : usr=5.71%, sys=23.29%, ctx=5993, majf=0, minf=96 00:10:06.480 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:06.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.480 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:06.480 issued rwts: total=68522,36994,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:06.480 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:06.480 00:10:06.480 Run status group 0 (all jobs): 00:10:06.480 READ: bw=44.6MiB/s (46.7MB/s), 44.6MiB/s-44.6MiB/s (46.7MB/s-46.7MB/s), io=268MiB (281MB), run=6007-6007msec 00:10:06.480 WRITE: bw=26.8MiB/s (28.1MB/s), 26.8MiB/s-26.8MiB/s (28.1MB/s-28.1MB/s), io=145MiB (152MB), run=5391-5391msec 00:10:06.480 00:10:06.480 Disk stats (read/write): 00:10:06.480 nvme0n1: ios=67923/36137, merge=0/0, ticks=493766/217024, in_queue=710790, util=98.70% 00:10:06.480 05:56:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:06.480 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:06.480 05:56:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:06.480 05:56:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1219 -- # local i=0 00:10:06.480 
05:56:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:06.480 05:56:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:06.480 05:56:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:06.480 05:56:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:06.480 05:56:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:10:06.480 05:56:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:06.739 05:56:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:10:06.739 05:56:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:10:06.739 05:56:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:10:06.739 05:56:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:10:06.739 05:56:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:06.739 05:56:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:10:06.739 05:56:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:06.739 05:56:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:10:06.739 05:56:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:06.739 05:56:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:06.739 rmmod nvme_tcp 00:10:06.997 rmmod nvme_fabrics 00:10:06.997 rmmod nvme_keyring 00:10:06.997 05:56:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:06.997 05:56:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:10:06.997 05:56:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:10:06.997 05:56:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 78760 ']' 00:10:06.997 05:56:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 78760 00:10:06.997 05:56:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@948 -- # '[' -z 78760 ']' 00:10:06.997 05:56:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # kill -0 78760 00:10:06.997 05:56:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # uname 00:10:06.997 05:56:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:06.997 05:56:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78760 00:10:06.997 05:56:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:06.998 05:56:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:06.998 05:56:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78760' 00:10:06.998 killing process with pid 78760 00:10:06.998 05:56:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@967 -- # kill 78760 00:10:06.998 05:56:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@972 -- # wait 78760 00:10:06.998 05:56:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 
-- # '[' '' == iso ']' 00:10:06.998 05:56:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:06.998 05:56:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:06.998 05:56:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:06.998 05:56:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:06.998 05:56:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:06.998 05:56:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:06.998 05:56:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:07.256 05:56:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:07.256 00:10:07.256 real 0m19.056s 00:10:07.256 user 1m11.959s 00:10:07.256 sys 0m9.396s 00:10:07.256 05:56:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:07.256 05:56:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:07.256 ************************************ 00:10:07.256 END TEST nvmf_target_multipath 00:10:07.256 ************************************ 00:10:07.256 05:56:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:07.256 05:56:58 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:07.256 05:56:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:07.256 05:56:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:07.256 05:56:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:07.256 ************************************ 00:10:07.256 START TEST nvmf_zcopy 00:10:07.256 ************************************ 00:10:07.256 05:56:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:07.256 * Looking for test storage... 
00:10:07.256 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:07.256 05:56:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:07.256 05:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:07.256 05:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:07.256 05:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=d95af516-4532-4483-a837-b3cd72acabce 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:07.257 Cannot find device "nvmf_tgt_br" 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # true 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:07.257 Cannot find device "nvmf_tgt_br2" 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:07.257 Cannot find device "nvmf_tgt_br" 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # true 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:07.257 Cannot find device "nvmf_tgt_br2" 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:10:07.257 05:56:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:07.516 05:56:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:07.517 05:56:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:07.517 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:07.517 05:56:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:10:07.517 05:56:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:07.517 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:07.517 05:56:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:10:07.517 05:56:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:07.517 05:56:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:07.517 05:56:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:07.517 05:56:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:07.517 05:56:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:10:07.517 05:56:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:07.517 05:56:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:07.517 05:56:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:07.517 05:56:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:07.517 05:56:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:07.517 05:56:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:07.517 05:56:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:07.517 05:56:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:07.517 05:56:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:07.517 05:56:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:07.517 05:56:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:07.517 05:56:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:07.517 05:56:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:07.517 05:56:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:07.517 05:56:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:07.517 05:56:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:07.517 05:56:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:07.517 05:56:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:07.517 05:56:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:07.517 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:07.517 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.104 ms 00:10:07.517 00:10:07.517 --- 10.0.0.2 ping statistics --- 00:10:07.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:07.517 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:10:07.517 05:56:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:07.517 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:07.517 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:10:07.517 00:10:07.517 --- 10.0.0.3 ping statistics --- 00:10:07.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:07.517 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:10:07.517 05:56:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:07.517 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:07.517 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:10:07.517 00:10:07.517 --- 10.0.0.1 ping statistics --- 00:10:07.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:07.517 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:10:07.517 05:56:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:07.517 05:56:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:10:07.517 05:56:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:07.517 05:56:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:07.517 05:56:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:07.517 05:56:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:07.517 05:56:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:07.517 05:56:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:07.517 05:56:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:07.776 05:56:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:07.776 05:56:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:07.776 05:56:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:07.776 05:56:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:07.776 05:56:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=79213 00:10:07.776 05:56:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 79213 00:10:07.776 05:56:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 79213 ']' 00:10:07.776 05:56:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:07.776 05:56:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:07.776 05:56:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:07.776 05:56:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:07.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:07.776 05:56:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:07.776 05:56:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:07.776 [2024-07-13 05:56:59.299196] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:10:07.776 [2024-07-13 05:56:59.299278] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:07.776 [2024-07-13 05:56:59.437999] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.776 [2024-07-13 05:56:59.480626] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:07.776 [2024-07-13 05:56:59.480689] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
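The nvmf_veth_init block traced above is the harness building its own virtual network before any NVMe/TCP traffic flows: a network namespace (nvmf_tgt_ns_spdk) holds the target-side veth ends carrying 10.0.0.2 and 10.0.0.3, the initiator-side veth (10.0.0.1) stays in the root namespace, the root-namespace ends are joined through the nvmf_br bridge, an iptables rule admits TCP port 4420 on the initiator interface, and single-packet pings confirm reachability in both directions. The earlier "Cannot find device" and "Cannot open network namespace" messages are just the preliminary cleanup finding nothing left over from a previous run. A minimal standalone sketch of the same topology, using the interface names and addresses from the trace (this approximates what nvmf/common.sh does and omits the second target interface that carries 10.0.0.3; it is not the harness code itself):

# run as root; names and addresses taken from the trace above
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move the target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link set nvmf_init_br master nvmf_br                      # bridge the root-namespace ends together
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                           # initiator -> target namespace
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1            # target namespace -> initiator

The sub-millisecond round-trip times in the ping output are what veth pairs on a single host should deliver; the harness only needs these checks to pass before it launches nvmf_tgt inside the namespace, which is what happens next in the trace.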
00:10:07.776 [2024-07-13 05:56:59.480702] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:07.776 [2024-07-13 05:56:59.480712] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:07.777 [2024-07-13 05:56:59.480720] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:07.777 [2024-07-13 05:56:59.480748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:08.117 [2024-07-13 05:56:59.515394] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:08.117 05:56:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:08.117 05:56:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:10:08.117 05:56:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:08.117 05:56:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:08.117 05:56:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:08.117 05:56:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:08.117 05:56:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:08.117 05:56:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:08.117 05:56:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.117 05:56:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:08.117 [2024-07-13 05:56:59.610241] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:08.117 05:56:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:08.117 05:56:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:08.117 05:56:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.117 05:56:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:08.117 05:56:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:08.117 05:56:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:08.117 05:56:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.117 05:56:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:08.117 [2024-07-13 05:56:59.626361] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:08.117 05:56:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:08.117 05:56:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:08.117 05:56:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.117 05:56:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:08.117 05:56:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:08.117 05:56:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:08.117 05:56:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.117 05:56:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 
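With nvmf_tgt up inside the namespace and listening on its RPC socket, zcopy.sh provisions the target entirely over JSON-RPC: a TCP transport created with zero-copy enabled (--zcopy) and in-capsule data disabled (-c 0), subsystem nqn.2016-06.io.spdk:cnode1 with any-host access, serial SPDK00000000000001 and up to 10 namespaces, a data listener and a discovery listener on 10.0.0.2:4420, and a malloc bdev that backs the namespace added just below. The rpc_cmd calls in the trace are the harness's thin wrapper around scripts/rpc.py; a rough standalone equivalent of the same sequence, assuming the default /var/tmp/spdk.sock RPC socket and the SPDK repository as the working directory, looks like this:

# provision the running nvmf_tgt over JSON-RPC (sketch of the traced rpc_cmd sequence)
scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0    # 32 MiB malloc bdev with 4096-byte blocks
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

Because the RPC endpoint is a path-based UNIX socket, these calls work from the root namespace even though the target process itself lives inside nvmf_tgt_ns_spdk.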
00:10:08.117 malloc0 00:10:08.117 05:56:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:08.117 05:56:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:08.117 05:56:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.117 05:56:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:08.117 05:56:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:08.117 05:56:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:08.117 05:56:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:08.117 05:56:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:10:08.117 05:56:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:10:08.117 05:56:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:08.117 05:56:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:08.117 { 00:10:08.117 "params": { 00:10:08.117 "name": "Nvme$subsystem", 00:10:08.117 "trtype": "$TEST_TRANSPORT", 00:10:08.117 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:08.117 "adrfam": "ipv4", 00:10:08.117 "trsvcid": "$NVMF_PORT", 00:10:08.117 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:08.117 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:08.117 "hdgst": ${hdgst:-false}, 00:10:08.117 "ddgst": ${ddgst:-false} 00:10:08.117 }, 00:10:08.117 "method": "bdev_nvme_attach_controller" 00:10:08.117 } 00:10:08.117 EOF 00:10:08.117 )") 00:10:08.117 05:56:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:10:08.117 05:56:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:10:08.117 05:56:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:10:08.117 05:56:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:08.117 "params": { 00:10:08.117 "name": "Nvme1", 00:10:08.117 "trtype": "tcp", 00:10:08.117 "traddr": "10.0.0.2", 00:10:08.117 "adrfam": "ipv4", 00:10:08.117 "trsvcid": "4420", 00:10:08.117 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:08.117 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:08.117 "hdgst": false, 00:10:08.117 "ddgst": false 00:10:08.117 }, 00:10:08.117 "method": "bdev_nvme_attach_controller" 00:10:08.117 }' 00:10:08.117 [2024-07-13 05:56:59.710934] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:10:08.117 [2024-07-13 05:56:59.711019] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79244 ] 00:10:08.395 [2024-07-13 05:56:59.849326] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.395 [2024-07-13 05:56:59.888215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.395 [2024-07-13 05:56:59.929105] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:08.395 Running I/O for 10 seconds... 
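The initiator side of this phase never touches the kernel nvme driver: gen_nvmf_target_json expands the heredoc shown in the trace into a bdev_nvme_attach_controller entry for Nvme1 at 10.0.0.2:4420 with header and data digests disabled, filters it through jq, and feeds it to bdevperf as a JSON config on /dev/fd/62, after which bdevperf drives a 10-second, queue-depth-128 verify workload with 8 KiB I/O across the zero-copy TCP transport. A hand-written config with the same effect might look like the sketch below; the outer "subsystems"/"bdev" wrapper is the usual SPDK JSON-config layout assumed here rather than shown verbatim in the trace, and the relative bdevperf path and /tmp file name stand in for the paths the harness uses:

cat > /tmp/nvmf_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
build/examples/bdevperf --json /tmp/nvmf_bdev.json -t 10 -q 128 -w verify -o 8192

The result table that follows (roughly 5.9k IOPS, 46 MiB/s, zero failures) is not asserted on numerically; the signal the test cares about is the verify workload completing cleanly over the zero-copy transport.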
00:10:18.360 00:10:18.360 Latency(us) 00:10:18.361 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:18.361 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:18.361 Verification LBA range: start 0x0 length 0x1000 00:10:18.361 Nvme1n1 : 10.02 5943.48 46.43 0.00 0.00 21469.11 2576.76 32887.16 00:10:18.361 =================================================================================================================== 00:10:18.361 Total : 5943.48 46.43 0.00 0.00 21469.11 2576.76 32887.16 00:10:18.620 05:57:10 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=79359 00:10:18.620 05:57:10 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:18.620 05:57:10 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.620 05:57:10 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:18.620 05:57:10 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:18.620 05:57:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:10:18.620 05:57:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:10:18.620 05:57:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:18.620 05:57:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:18.620 { 00:10:18.620 "params": { 00:10:18.620 "name": "Nvme$subsystem", 00:10:18.620 "trtype": "$TEST_TRANSPORT", 00:10:18.620 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:18.620 "adrfam": "ipv4", 00:10:18.620 "trsvcid": "$NVMF_PORT", 00:10:18.620 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:18.620 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:18.620 "hdgst": ${hdgst:-false}, 00:10:18.620 "ddgst": ${ddgst:-false} 00:10:18.620 }, 00:10:18.620 "method": "bdev_nvme_attach_controller" 00:10:18.620 } 00:10:18.620 EOF 00:10:18.620 )") 00:10:18.620 05:57:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:10:18.620 [2024-07-13 05:57:10.189548] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.620 [2024-07-13 05:57:10.189591] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.620 05:57:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
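The second bdevperf invocation traced above (--json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192) switches to a 5-second 50/50 random read/write workload against the same Nvme1 controller, and while it runs the test keeps issuing management RPCs against the live subsystem. The long run of paired messages that fills the rest of this excerpt, spdk_nvmf_subsystem_add_ns_ext reporting "Requested NSID 1 already in use" followed by nvmf_rpc_ns_paused reporting "Unable to add namespace", is the target cleanly rejecting repeated nvmf_subsystem_add_ns calls for a namespace ID that already exists; the error being raised from the paused callback (nvmf_rpc_ns_paused) shows that each attempt still pauses and resumes the subsystem around the rejected add, so the pause/resume path is exercised while zero-copy I/O is in flight. The steady 10-20 ms cadence over the whole run suggests a deliberate RPC loop rather than a malfunction; a sketch of the kind of loop that would produce this traffic (an illustration only, not the literal code in zcopy.sh, with a made-up iteration count):

# hypothetical loop mirroring the add_ns rejections seen in the trace while bdevperf runs
for _ in $(seq 1 200); do
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true    # expected to fail: NSID 1 is taken
done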
00:10:18.620 05:57:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:10:18.620 05:57:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:18.620 "params": { 00:10:18.620 "name": "Nvme1", 00:10:18.620 "trtype": "tcp", 00:10:18.620 "traddr": "10.0.0.2", 00:10:18.620 "adrfam": "ipv4", 00:10:18.620 "trsvcid": "4420", 00:10:18.620 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:18.620 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:18.620 "hdgst": false, 00:10:18.620 "ddgst": false 00:10:18.620 }, 00:10:18.620 "method": "bdev_nvme_attach_controller" 00:10:18.620 }' 00:10:18.620 [2024-07-13 05:57:10.201523] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.620 [2024-07-13 05:57:10.201551] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.620 [2024-07-13 05:57:10.213526] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.620 [2024-07-13 05:57:10.213551] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.620 [2024-07-13 05:57:10.225511] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.620 [2024-07-13 05:57:10.225536] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.620 [2024-07-13 05:57:10.237511] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.620 [2024-07-13 05:57:10.237534] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.620 [2024-07-13 05:57:10.237540] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:10:18.620 [2024-07-13 05:57:10.237607] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79359 ] 00:10:18.620 [2024-07-13 05:57:10.249518] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.620 [2024-07-13 05:57:10.249541] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.620 [2024-07-13 05:57:10.261528] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.620 [2024-07-13 05:57:10.261561] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.620 [2024-07-13 05:57:10.273541] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.620 [2024-07-13 05:57:10.273564] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.620 [2024-07-13 05:57:10.285531] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.620 [2024-07-13 05:57:10.285553] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.620 [2024-07-13 05:57:10.297535] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.620 [2024-07-13 05:57:10.297558] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.620 [2024-07-13 05:57:10.309555] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.620 [2024-07-13 05:57:10.309585] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.620 [2024-07-13 05:57:10.321550] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:10:18.620 [2024-07-13 05:57:10.321577] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.620 [2024-07-13 05:57:10.333556] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.620 [2024-07-13 05:57:10.333584] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.620 [2024-07-13 05:57:10.345580] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.620 [2024-07-13 05:57:10.345611] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.879 [2024-07-13 05:57:10.357577] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.879 [2024-07-13 05:57:10.357623] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.879 [2024-07-13 05:57:10.369567] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.879 [2024-07-13 05:57:10.369595] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.879 [2024-07-13 05:57:10.375488] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.879 [2024-07-13 05:57:10.381581] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.879 [2024-07-13 05:57:10.381613] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.879 [2024-07-13 05:57:10.393588] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.879 [2024-07-13 05:57:10.393622] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.880 [2024-07-13 05:57:10.405592] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.880 [2024-07-13 05:57:10.405626] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.880 [2024-07-13 05:57:10.410080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.880 [2024-07-13 05:57:10.417574] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.880 [2024-07-13 05:57:10.417599] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.880 [2024-07-13 05:57:10.429611] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.880 [2024-07-13 05:57:10.429649] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.880 [2024-07-13 05:57:10.441618] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.880 [2024-07-13 05:57:10.441660] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.880 [2024-07-13 05:57:10.446589] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:18.880 [2024-07-13 05:57:10.453610] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.880 [2024-07-13 05:57:10.453645] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.880 [2024-07-13 05:57:10.465594] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.880 [2024-07-13 05:57:10.465622] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.880 [2024-07-13 05:57:10.477607] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.880 [2024-07-13 
05:57:10.477636] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.880 [2024-07-13 05:57:10.489623] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.880 [2024-07-13 05:57:10.489652] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.880 [2024-07-13 05:57:10.501637] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.880 [2024-07-13 05:57:10.501664] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.880 [2024-07-13 05:57:10.513649] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.880 [2024-07-13 05:57:10.513678] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.880 [2024-07-13 05:57:10.525667] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.880 [2024-07-13 05:57:10.525695] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.880 [2024-07-13 05:57:10.537673] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.880 [2024-07-13 05:57:10.537701] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.880 Running I/O for 5 seconds... 00:10:18.880 [2024-07-13 05:57:10.555363] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.880 [2024-07-13 05:57:10.555420] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.880 [2024-07-13 05:57:10.572632] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.880 [2024-07-13 05:57:10.572676] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.880 [2024-07-13 05:57:10.589673] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.880 [2024-07-13 05:57:10.589716] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.139 [2024-07-13 05:57:10.607297] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.139 [2024-07-13 05:57:10.607342] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.139 [2024-07-13 05:57:10.621669] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.139 [2024-07-13 05:57:10.621698] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.139 [2024-07-13 05:57:10.636889] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.139 [2024-07-13 05:57:10.636936] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.139 [2024-07-13 05:57:10.652102] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.139 [2024-07-13 05:57:10.652152] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.139 [2024-07-13 05:57:10.668277] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.139 [2024-07-13 05:57:10.668339] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.139 [2024-07-13 05:57:10.679486] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.139 [2024-07-13 05:57:10.679532] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.139 [2024-07-13 
05:57:10.694963] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.139 [2024-07-13 05:57:10.695011] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.139 [2024-07-13 05:57:10.711324] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.139 [2024-07-13 05:57:10.711371] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.139 [2024-07-13 05:57:10.727102] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.139 [2024-07-13 05:57:10.727150] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.139 [2024-07-13 05:57:10.743166] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.139 [2024-07-13 05:57:10.743209] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.139 [2024-07-13 05:57:10.759666] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.139 [2024-07-13 05:57:10.759698] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.139 [2024-07-13 05:57:10.777989] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.139 [2024-07-13 05:57:10.778061] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.139 [2024-07-13 05:57:10.794092] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.139 [2024-07-13 05:57:10.794125] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.139 [2024-07-13 05:57:10.810177] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.139 [2024-07-13 05:57:10.810210] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.139 [2024-07-13 05:57:10.826202] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.139 [2024-07-13 05:57:10.826235] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.139 [2024-07-13 05:57:10.835668] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.139 [2024-07-13 05:57:10.835702] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.139 [2024-07-13 05:57:10.852037] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.139 [2024-07-13 05:57:10.852070] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.397 [2024-07-13 05:57:10.867245] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.397 [2024-07-13 05:57:10.867279] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.398 [2024-07-13 05:57:10.876716] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.398 [2024-07-13 05:57:10.876749] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.398 [2024-07-13 05:57:10.893520] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.398 [2024-07-13 05:57:10.893552] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.398 [2024-07-13 05:57:10.910114] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.398 [2024-07-13 05:57:10.910145] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.398 [2024-07-13 05:57:10.926703] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.398 [2024-07-13 05:57:10.926737] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.398 [2024-07-13 05:57:10.943699] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.398 [2024-07-13 05:57:10.943730] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.398 [2024-07-13 05:57:10.960490] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.398 [2024-07-13 05:57:10.960541] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.398 [2024-07-13 05:57:10.977152] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.398 [2024-07-13 05:57:10.977200] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.398 [2024-07-13 05:57:10.996176] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.398 [2024-07-13 05:57:10.996209] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.398 [2024-07-13 05:57:11.011626] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.398 [2024-07-13 05:57:11.011658] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.398 [2024-07-13 05:57:11.031635] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.398 [2024-07-13 05:57:11.031679] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.398 [2024-07-13 05:57:11.047648] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.398 [2024-07-13 05:57:11.047682] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.398 [2024-07-13 05:57:11.057267] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.398 [2024-07-13 05:57:11.057300] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.398 [2024-07-13 05:57:11.074611] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.398 [2024-07-13 05:57:11.074647] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.398 [2024-07-13 05:57:11.091162] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.398 [2024-07-13 05:57:11.091196] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.398 [2024-07-13 05:57:11.109550] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.398 [2024-07-13 05:57:11.109610] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.398 [2024-07-13 05:57:11.124719] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.398 [2024-07-13 05:57:11.124752] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.656 [2024-07-13 05:57:11.134504] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.656 [2024-07-13 05:57:11.134538] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.656 [2024-07-13 05:57:11.150940] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.656 [2024-07-13 05:57:11.150974] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.656 [2024-07-13 05:57:11.167888] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.656 [2024-07-13 05:57:11.167921] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.656 [2024-07-13 05:57:11.184520] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.656 [2024-07-13 05:57:11.184553] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.657 [2024-07-13 05:57:11.201486] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.657 [2024-07-13 05:57:11.201533] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.657 [2024-07-13 05:57:11.218129] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.657 [2024-07-13 05:57:11.218162] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.657 [2024-07-13 05:57:11.235017] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.657 [2024-07-13 05:57:11.235052] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.657 [2024-07-13 05:57:11.250771] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.657 [2024-07-13 05:57:11.250803] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.657 [2024-07-13 05:57:11.259746] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.657 [2024-07-13 05:57:11.259779] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.657 [2024-07-13 05:57:11.276510] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.657 [2024-07-13 05:57:11.276544] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.657 [2024-07-13 05:57:11.293493] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.657 [2024-07-13 05:57:11.293525] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.657 [2024-07-13 05:57:11.310272] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.657 [2024-07-13 05:57:11.310304] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.657 [2024-07-13 05:57:11.327292] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.657 [2024-07-13 05:57:11.327371] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.657 [2024-07-13 05:57:11.344229] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.657 [2024-07-13 05:57:11.344262] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.657 [2024-07-13 05:57:11.361561] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.657 [2024-07-13 05:57:11.361593] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.657 [2024-07-13 05:57:11.377009] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.657 [2024-07-13 05:57:11.377041] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.915 [2024-07-13 05:57:11.394614] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.915 [2024-07-13 05:57:11.394648] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.916 [2024-07-13 05:57:11.411247] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.916 [2024-07-13 05:57:11.411279] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.916 [2024-07-13 05:57:11.428500] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.916 [2024-07-13 05:57:11.428532] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.916 [2024-07-13 05:57:11.445485] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.916 [2024-07-13 05:57:11.445531] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.916 [2024-07-13 05:57:11.462075] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.916 [2024-07-13 05:57:11.462107] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.916 [2024-07-13 05:57:11.478457] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.916 [2024-07-13 05:57:11.478489] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.916 [2024-07-13 05:57:11.497252] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.916 [2024-07-13 05:57:11.497285] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.916 [2024-07-13 05:57:11.513102] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.916 [2024-07-13 05:57:11.513168] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.916 [2024-07-13 05:57:11.529652] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.916 [2024-07-13 05:57:11.529685] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.916 [2024-07-13 05:57:11.546316] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.916 [2024-07-13 05:57:11.546349] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.916 [2024-07-13 05:57:11.562814] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.916 [2024-07-13 05:57:11.562847] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.916 [2024-07-13 05:57:11.580390] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.916 [2024-07-13 05:57:11.580420] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.916 [2024-07-13 05:57:11.595617] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.916 [2024-07-13 05:57:11.595652] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.916 [2024-07-13 05:57:11.605623] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.916 [2024-07-13 05:57:11.605669] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.916 [2024-07-13 05:57:11.621235] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.916 [2024-07-13 05:57:11.621269] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.916 [2024-07-13 05:57:11.637456] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.916 [2024-07-13 05:57:11.637488] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.175 [2024-07-13 05:57:11.655422] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.175 [2024-07-13 05:57:11.655450] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.175 [2024-07-13 05:57:11.670240] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.175 [2024-07-13 05:57:11.670271] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.175 [2024-07-13 05:57:11.679836] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.175 [2024-07-13 05:57:11.679869] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.175 [2024-07-13 05:57:11.696702] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.175 [2024-07-13 05:57:11.696736] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.175 [2024-07-13 05:57:11.713670] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.175 [2024-07-13 05:57:11.713713] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.175 [2024-07-13 05:57:11.731552] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.175 [2024-07-13 05:57:11.731599] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.175 [2024-07-13 05:57:11.747493] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.175 [2024-07-13 05:57:11.747528] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.175 [2024-07-13 05:57:11.763978] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.175 [2024-07-13 05:57:11.764014] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.175 [2024-07-13 05:57:11.782480] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.175 [2024-07-13 05:57:11.782515] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.175 [2024-07-13 05:57:11.797466] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.175 [2024-07-13 05:57:11.797502] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.175 [2024-07-13 05:57:11.807386] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.175 [2024-07-13 05:57:11.807432] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.175 [2024-07-13 05:57:11.823021] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.175 [2024-07-13 05:57:11.823100] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.175 [2024-07-13 05:57:11.840147] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.175 [2024-07-13 05:57:11.840180] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:20.175 [2024-07-13 05:57:11.857404] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:20.175 [2024-07-13 05:57:11.857493] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
(the two messages above repeat as a pair for every further add-namespace attempt between 05:57:11.874 and 05:57:15.544; the intervening entries, identical except for their timestamps, are elided)
00:10:24.065
00:10:24.065 Latency(us)
00:10:24.065 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:24.065 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:10:24.065 Nvme1n1 : 5.01 11625.33 90.82 0.00 0.00 10998.29 4110.89 20971.52
00:10:24.065 ===================================================================================================================
00:10:24.065 Total : 11625.33 90.82 0.00 0.00 10998.29 4110.89 20971.52
00:10:24.065
(the same error pair continues during teardown between 05:57:15.556 and 05:57:15.652; those entries are likewise elided)
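The pair of messages condensed above is the target-side signature of an add-namespace RPC that asks for an NSID which is already claimed: spdk_nvmf_subsystem_add_ns_ext() rejects the request and nvmf_rpc.c surfaces it as "Unable to add namespace". A minimal sketch of provoking it by hand is shown below; it assumes a running target with subsystem nqn.2016-06.io.spdk:cnode1 and a bdev named malloc0, and the rpc.py path is illustrative (the -n flag mirrors the rpc_cmd calls that appear later in this log):

# Hedged sketch, not captured output: adding the same NSID twice reproduces the error pair above.
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # first call creates namespace 1
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # second call fails: Requested NSID 1 already in use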
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.065 [2024-07-13 05:57:15.652082] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.065 [2024-07-13 05:57:15.664059] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.065 [2024-07-13 05:57:15.664094] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.065 [2024-07-13 05:57:15.676030] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.065 [2024-07-13 05:57:15.676055] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.065 [2024-07-13 05:57:15.688033] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.065 [2024-07-13 05:57:15.688057] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.065 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (79359) - No such process 00:10:24.065 05:57:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 79359 00:10:24.065 05:57:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:24.065 05:57:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.065 05:57:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:24.065 05:57:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.065 05:57:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:24.065 05:57:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.065 05:57:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:24.065 delay0 00:10:24.065 05:57:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.065 05:57:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:24.065 05:57:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.065 05:57:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:24.065 05:57:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.065 05:57:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:24.329 [2024-07-13 05:57:15.867515] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:30.892 Initializing NVMe Controllers 00:10:30.892 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:30.892 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:30.892 Initialization complete. Launching workers. 
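For readability, the abort example invocation traced above is restated below with its arguments broken out; the per-flag notes are an interpretive reading of this particular run, not authoritative option documentation:

# Same command as in the trace above (flag notes are interpretive):
#   -c 0x1        core mask (core 0 only)
#   -t 5          run time in seconds
#   -q 64         queue depth
#   -w randrw     random mixed read/write workload
#   -M 50         50% reads / 50% writes
#   -l warning    log level for the example app
#   -r '...'      target transport ID: NVMe/TCP over IPv4 at 10.0.0.2:4420, namespace 1
/home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'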
00:10:30.892 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 93 00:10:30.892 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 380, failed to submit 33 00:10:30.892 success 262, unsuccess 118, failed 0 00:10:30.892 05:57:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:30.892 05:57:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:30.892 05:57:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:30.892 05:57:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:10:30.892 05:57:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:30.892 05:57:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:10:30.892 05:57:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:30.892 05:57:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:30.892 rmmod nvme_tcp 00:10:30.892 rmmod nvme_fabrics 00:10:30.892 rmmod nvme_keyring 00:10:30.892 05:57:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:30.892 05:57:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:10:30.892 05:57:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:10:30.892 05:57:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 79213 ']' 00:10:30.892 05:57:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 79213 00:10:30.892 05:57:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 79213 ']' 00:10:30.892 05:57:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 79213 00:10:30.892 05:57:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:10:30.892 05:57:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:30.892 05:57:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79213 00:10:30.892 killing process with pid 79213 00:10:30.892 05:57:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:30.892 05:57:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:30.892 05:57:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79213' 00:10:30.892 05:57:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 79213 00:10:30.892 05:57:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 79213 00:10:30.892 05:57:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:30.892 05:57:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:30.892 05:57:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:30.892 05:57:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:30.892 05:57:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:30.892 05:57:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:30.892 05:57:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:30.892 05:57:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:30.892 05:57:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:30.892 00:10:30.892 real 0m23.476s 00:10:30.892 user 0m38.907s 00:10:30.892 sys 0m6.499s 00:10:30.892 05:57:22 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:10:30.892 ************************************ 00:10:30.892 05:57:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:30.892 END TEST nvmf_zcopy 00:10:30.892 ************************************ 00:10:30.892 05:57:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:30.892 05:57:22 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:30.892 05:57:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:30.892 05:57:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:30.892 05:57:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:30.892 ************************************ 00:10:30.892 START TEST nvmf_nmic 00:10:30.892 ************************************ 00:10:30.892 05:57:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:30.892 * Looking for test storage... 00:10:30.892 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:30.892 05:57:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:30.892 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:30.892 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:30.892 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:30.892 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:30.892 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:30.892 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:30.892 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:30.892 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:30.892 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:30.892 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:30.892 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:30.892 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:10:30.892 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=d95af516-4532-4483-a837-b3cd72acabce 00:10:30.892 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:30.892 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:30.892 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:30.893 Cannot find device "nvmf_tgt_br" 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:30.893 Cannot find device "nvmf_tgt_br2" 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:30.893 Cannot find device "nvmf_tgt_br" 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:30.893 Cannot find device "nvmf_tgt_br2" 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type 
bridge 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:30.893 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:30.893 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:30.893 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:31.152 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:31.152 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:31.152 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:31.152 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:31.152 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:31.152 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:31.152 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:31.152 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:31.152 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:31.152 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:31.152 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:31.152 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:31.152 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:31.152 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:31.152 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:31.152 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:31.152 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:31.152 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:31.152 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:31.152 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:10:31.152 00:10:31.152 --- 10.0.0.2 ping statistics --- 00:10:31.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:31.152 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:10:31.152 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:31.152 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:31.152 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:10:31.152 00:10:31.152 --- 10.0.0.3 ping statistics --- 00:10:31.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:31.152 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:10:31.152 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:31.152 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:31.152 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:10:31.152 00:10:31.152 --- 10.0.0.1 ping statistics --- 00:10:31.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:31.152 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:10:31.152 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:31.152 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:10:31.152 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:31.152 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:31.152 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:31.152 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:31.152 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:31.152 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:31.152 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:31.152 05:57:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:31.152 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:31.152 05:57:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:31.152 05:57:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:31.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:31.152 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=79676 00:10:31.152 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 79676 00:10:31.153 05:57:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:31.153 05:57:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 79676 ']' 00:10:31.153 05:57:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:31.153 05:57:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:31.153 05:57:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:31.153 05:57:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:31.153 05:57:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:31.153 [2024-07-13 05:57:22.824304] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
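The three successful pings above close out nvmf_veth_init, which wires the initiator address 10.0.0.1 (kept on the host) to the target addresses 10.0.0.2 and 10.0.0.3 (moved into the nvmf_tgt_ns_spdk namespace) through veth pairs joined by the nvmf_br bridge, then adds iptables ACCEPT rules for the NVMe/TCP port and bridge forwarding. A condensed sketch of that wiring with a single target leg, using the same commands and interface names as the trace:

ip netns add nvmf_tgt_ns_spdk

# One veth pair per leg; the *_br ends stay on the host and get bridged together.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# Allow NVMe/TCP traffic in and verify the path before starting nvmf_tgt in the namespace.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2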
00:10:31.153 [2024-07-13 05:57:22.824372] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:31.410 [2024-07-13 05:57:22.958699] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:31.410 [2024-07-13 05:57:22.994993] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:31.410 [2024-07-13 05:57:22.995264] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:31.410 [2024-07-13 05:57:22.995427] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:31.410 [2024-07-13 05:57:22.995525] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:31.410 [2024-07-13 05:57:22.995561] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:31.410 [2024-07-13 05:57:22.995688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:31.410 [2024-07-13 05:57:22.995781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:31.410 [2024-07-13 05:57:22.997313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:31.410 [2024-07-13 05:57:22.997343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.410 [2024-07-13 05:57:23.024633] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:31.410 05:57:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:31.410 05:57:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:10:31.410 05:57:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:31.411 05:57:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:31.411 05:57:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:31.411 05:57:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:31.411 05:57:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:31.411 05:57:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.411 05:57:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:31.411 [2024-07-13 05:57:23.115232] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:31.411 05:57:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.411 05:57:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:31.411 05:57:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.411 05:57:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:31.669 Malloc0 00:10:31.669 05:57:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.669 05:57:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:31.669 05:57:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.669 05:57:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:31.669 05:57:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
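With nvmf_tgt up inside the namespace, the nmic setup traced here (and continued just below) is a short RPC sequence: create the TCP transport, carve out a malloc bdev, and expose it as namespace 1 of cnode1 on 10.0.0.2:4420. A minimal sketch, assuming the target started above is serving the default /var/tmp/spdk.sock; commands and arguments are the ones visible in the trace:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$RPC nvmf_create_transport -t tcp -o -u 8192    # transport options exactly as set by nvmftestinit
$RPC bdev_malloc_create 64 512 -b Malloc0       # 64 MiB bdev, 512-byte blocks (MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE)
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420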
00:10:31.669 05:57:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:31.669 05:57:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.669 05:57:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:31.669 05:57:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.669 05:57:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:31.669 05:57:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.669 05:57:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:31.669 [2024-07-13 05:57:23.176225] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:31.669 05:57:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.669 05:57:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:31.669 test case1: single bdev can't be used in multiple subsystems 00:10:31.669 05:57:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:31.669 05:57:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.669 05:57:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:31.669 05:57:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.669 05:57:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:31.669 05:57:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.669 05:57:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:31.669 05:57:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.669 05:57:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:31.669 05:57:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:31.669 05:57:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.669 05:57:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:31.669 [2024-07-13 05:57:23.200098] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:31.669 [2024-07-13 05:57:23.200132] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:31.669 [2024-07-13 05:57:23.200142] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.669 request: 00:10:31.669 { 00:10:31.669 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:31.669 "namespace": { 00:10:31.669 "bdev_name": "Malloc0", 00:10:31.669 "no_auto_visible": false 00:10:31.669 }, 00:10:31.669 "method": "nvmf_subsystem_add_ns", 00:10:31.669 "req_id": 1 00:10:31.669 } 00:10:31.669 Got JSON-RPC error response 00:10:31.669 response: 00:10:31.669 { 00:10:31.669 "code": -32602, 00:10:31.669 "message": "Invalid parameters" 00:10:31.669 } 00:10:31.669 Adding namespace failed - expected result. 
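The -32602 response above is the point of test case 1: Malloc0 is already claimed exclusive_write by cnode1, so a second subsystem cannot attach the same bdev. A hedged reproduction using the same RPCs; the if/echo wrapper is illustrative shell rather than the exact logic of nmic.sh, and the error text may vary between SPDK versions:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420

# Malloc0 already belongs to cnode1, so this call is expected to fail with
# JSON-RPC error -32602 ("Invalid parameters"), as logged above.
if $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
    echo 'unexpected: shared bdev was accepted' >&2
    exit 1
fi
echo 'Adding namespace failed - expected result.'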
00:10:31.669 test case2: host connect to nvmf target in multiple paths 00:10:31.669 05:57:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:10:31.669 05:57:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:31.669 05:57:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:31.669 05:57:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:31.669 05:57:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:31.669 05:57:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:31.669 05:57:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.669 05:57:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:31.669 [2024-07-13 05:57:23.212207] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:31.669 05:57:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.669 05:57:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid=d95af516-4532-4483-a837-b3cd72acabce -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:31.669 05:57:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid=d95af516-4532-4483-a837-b3cd72acabce -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:31.928 05:57:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:31.928 05:57:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:10:31.928 05:57:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:31.928 05:57:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:31.928 05:57:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:10:33.833 05:57:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:33.833 05:57:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:33.833 05:57:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:33.833 05:57:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:33.833 05:57:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:33.833 05:57:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:10:33.833 05:57:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:33.833 [global] 00:10:33.833 thread=1 00:10:33.833 invalidate=1 00:10:33.833 rw=write 00:10:33.833 time_based=1 00:10:33.833 runtime=1 00:10:33.833 ioengine=libaio 00:10:33.833 direct=1 00:10:33.833 bs=4096 00:10:33.833 iodepth=1 00:10:33.833 norandommap=0 00:10:33.833 numjobs=1 00:10:33.833 00:10:33.833 verify_dump=1 00:10:33.833 verify_backlog=512 00:10:33.833 verify_state_save=0 00:10:33.833 do_verify=1 00:10:33.833 verify=crc32c-intel 00:10:33.833 [job0] 00:10:33.833 filename=/dev/nvme0n1 00:10:33.833 Could not set queue depth (nvme0n1) 00:10:34.092 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, 
(T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:34.092 fio-3.35 00:10:34.092 Starting 1 thread 00:10:35.471 00:10:35.471 job0: (groupid=0, jobs=1): err= 0: pid=79760: Sat Jul 13 05:57:26 2024 00:10:35.471 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:10:35.471 slat (nsec): min=11670, max=89762, avg=14599.57, stdev=5104.25 00:10:35.471 clat (usec): min=124, max=328, avg=168.11, stdev=22.50 00:10:35.471 lat (usec): min=139, max=340, avg=182.71, stdev=23.48 00:10:35.471 clat percentiles (usec): 00:10:35.471 | 1.00th=[ 133], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 149], 00:10:35.471 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 165], 60.00th=[ 169], 00:10:35.471 | 70.00th=[ 178], 80.00th=[ 186], 90.00th=[ 198], 95.00th=[ 210], 00:10:35.471 | 99.00th=[ 235], 99.50th=[ 251], 99.90th=[ 281], 99.95th=[ 289], 00:10:35.471 | 99.99th=[ 330] 00:10:35.471 write: IOPS=3494, BW=13.7MiB/s (14.3MB/s)(13.7MiB/1001msec); 0 zone resets 00:10:35.471 slat (usec): min=16, max=101, avg=21.50, stdev= 6.70 00:10:35.472 clat (usec): min=74, max=194, avg=100.68, stdev=16.09 00:10:35.472 lat (usec): min=92, max=296, avg=122.18, stdev=18.44 00:10:35.472 clat percentiles (usec): 00:10:35.472 | 1.00th=[ 78], 5.00th=[ 82], 10.00th=[ 85], 20.00th=[ 88], 00:10:35.472 | 30.00th=[ 91], 40.00th=[ 94], 50.00th=[ 96], 60.00th=[ 100], 00:10:35.472 | 70.00th=[ 105], 80.00th=[ 113], 90.00th=[ 125], 95.00th=[ 135], 00:10:35.472 | 99.00th=[ 151], 99.50th=[ 157], 99.90th=[ 180], 99.95th=[ 186], 00:10:35.472 | 99.99th=[ 194] 00:10:35.472 bw ( KiB/s): min=13776, max=13776, per=98.55%, avg=13776.00, stdev= 0.00, samples=1 00:10:35.472 iops : min= 3444, max= 3444, avg=3444.00, stdev= 0.00, samples=1 00:10:35.472 lat (usec) : 100=32.40%, 250=67.35%, 500=0.24% 00:10:35.472 cpu : usr=2.50%, sys=9.30%, ctx=6570, majf=0, minf=2 00:10:35.472 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:35.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.472 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.472 issued rwts: total=3072,3498,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:35.472 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:35.472 00:10:35.472 Run status group 0 (all jobs): 00:10:35.472 READ: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:10:35.472 WRITE: bw=13.7MiB/s (14.3MB/s), 13.7MiB/s-13.7MiB/s (14.3MB/s-14.3MB/s), io=13.7MiB (14.3MB), run=1001-1001msec 00:10:35.472 00:10:35.472 Disk stats (read/write): 00:10:35.472 nvme0n1: ios=2871/3072, merge=0/0, ticks=518/367, in_queue=885, util=91.48% 00:10:35.472 05:57:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:35.472 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:35.472 05:57:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:35.472 05:57:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:10:35.472 05:57:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:35.472 05:57:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:35.472 05:57:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:35.472 05:57:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:35.472 05:57:26 nvmf_tcp.nvmf_nmic -- 
common/autotest_common.sh@1231 -- # return 0 00:10:35.472 05:57:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:35.472 05:57:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:35.472 05:57:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:35.472 05:57:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:10:35.472 05:57:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:35.472 05:57:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:10:35.472 05:57:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:35.472 05:57:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:35.472 rmmod nvme_tcp 00:10:35.472 rmmod nvme_fabrics 00:10:35.472 rmmod nvme_keyring 00:10:35.472 05:57:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:35.472 05:57:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:10:35.472 05:57:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:10:35.472 05:57:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 79676 ']' 00:10:35.472 05:57:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 79676 00:10:35.472 05:57:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 79676 ']' 00:10:35.472 05:57:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 79676 00:10:35.472 05:57:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:10:35.472 05:57:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:35.472 05:57:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79676 00:10:35.472 killing process with pid 79676 00:10:35.472 05:57:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:35.472 05:57:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:35.472 05:57:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79676' 00:10:35.472 05:57:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 79676 00:10:35.472 05:57:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 79676 00:10:35.472 05:57:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:35.472 05:57:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:35.472 05:57:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:35.472 05:57:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:35.472 05:57:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:35.472 05:57:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:35.472 05:57:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:35.472 05:57:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:35.472 05:57:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:35.472 00:10:35.472 real 0m4.862s 00:10:35.472 user 0m15.096s 00:10:35.472 sys 0m2.169s 00:10:35.472 05:57:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:35.472 ************************************ 00:10:35.472 END TEST nvmf_nmic 00:10:35.472 05:57:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:35.472 ************************************ 00:10:35.732 05:57:27 
nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:35.732 05:57:27 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:35.732 05:57:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:35.732 05:57:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:35.732 05:57:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:35.732 ************************************ 00:10:35.732 START TEST nvmf_fio_target 00:10:35.732 ************************************ 00:10:35.732 05:57:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:35.732 * Looking for test storage... 00:10:35.732 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:35.732 05:57:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:35.732 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:35.732 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:35.732 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:35.732 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:35.732 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:35.732 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:35.732 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:35.732 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:35.732 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:35.732 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:35.732 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:35.732 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:10:35.732 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=d95af516-4532-4483-a837-b3cd72acabce 00:10:35.732 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:35.732 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:35.732 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:35.732 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:35.732 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:35.732 05:57:27 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:35.732 05:57:27 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:35.732 05:57:27 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:35.732 05:57:27 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.732 05:57:27 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.732 05:57:27 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.732 05:57:27 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:35.732 05:57:27 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.732 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:10:35.732 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:35.732 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:35.732 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:35.732 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:35.732 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:35.732 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:35.732 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:35.732 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:35.732 05:57:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:35.732 05:57:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:35.732 05:57:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:35.732 05:57:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:35.732 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:35.732 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:35.732 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:35.732 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:35.732 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:35.732 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:35.732 05:57:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:35.732 05:57:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:35.732 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:35.732 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:35.732 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:35.732 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:35.732 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:35.732 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:35.732 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:35.732 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:35.732 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:35.732 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:35.732 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:35.732 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:35.732 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:35.733 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:35.733 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:35.733 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:35.733 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:35.733 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:35.733 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:35.733 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:35.733 Cannot find device "nvmf_tgt_br" 00:10:35.733 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:10:35.733 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:35.733 Cannot find device "nvmf_tgt_br2" 00:10:35.733 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:10:35.733 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:35.733 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # 
ip link set nvmf_tgt_br down 00:10:35.733 Cannot find device "nvmf_tgt_br" 00:10:35.733 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:10:35.733 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:35.733 Cannot find device "nvmf_tgt_br2" 00:10:35.733 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:10:35.733 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:35.733 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:35.733 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:35.733 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:35.994 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:10:35.994 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:35.994 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:35.994 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:10:35.994 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:35.994 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:35.994 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:35.994 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:35.994 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:35.994 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:35.994 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:35.994 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:35.994 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:35.994 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:35.994 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:35.994 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:35.994 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:35.994 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:35.994 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:35.994 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:35.994 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:35.994 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:35.994 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:35.994 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:10:35.994 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:35.994 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:35.994 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:35.994 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:35.994 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:35.994 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:10:35.994 00:10:35.994 --- 10.0.0.2 ping statistics --- 00:10:35.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:35.994 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:10:35.994 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:35.994 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:35.994 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:10:35.994 00:10:35.994 --- 10.0.0.3 ping statistics --- 00:10:35.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:35.994 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:10:35.994 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:35.994 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:35.994 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:10:35.994 00:10:35.994 --- 10.0.0.1 ping statistics --- 00:10:35.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:35.994 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:10:35.994 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:35.994 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:10:35.994 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:35.994 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:35.994 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:35.994 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:35.994 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:35.994 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:35.994 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:35.994 05:57:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:35.994 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:35.994 05:57:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:35.994 05:57:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.994 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=79935 00:10:35.994 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:35.994 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 79935 00:10:35.994 05:57:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 79935 ']' 00:10:35.994 05:57:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:35.994 05:57:27 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:35.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:35.994 05:57:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:35.994 05:57:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:35.994 05:57:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.994 [2024-07-13 05:57:27.703831] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:10:35.994 [2024-07-13 05:57:27.703888] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:36.287 [2024-07-13 05:57:27.839815] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:36.287 [2024-07-13 05:57:27.883544] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:36.287 [2024-07-13 05:57:27.883848] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:36.287 [2024-07-13 05:57:27.884018] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:36.287 [2024-07-13 05:57:27.884187] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:36.287 [2024-07-13 05:57:27.884230] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:36.287 [2024-07-13 05:57:27.884495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:36.287 [2024-07-13 05:57:27.884566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:36.287 [2024-07-13 05:57:27.884643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:36.287 [2024-07-13 05:57:27.884644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.287 [2024-07-13 05:57:27.921074] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:36.287 05:57:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:36.287 05:57:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:10:36.287 05:57:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:36.287 05:57:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:36.287 05:57:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.572 05:57:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:36.572 05:57:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:36.573 [2024-07-13 05:57:28.275732] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:36.835 05:57:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:37.094 05:57:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:37.094 05:57:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 00:10:37.353 05:57:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:37.353 05:57:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:37.611 05:57:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:37.611 05:57:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:37.869 05:57:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:37.869 05:57:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:38.127 05:57:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:38.385 05:57:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:38.386 05:57:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:38.644 05:57:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:38.644 05:57:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:38.903 05:57:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:38.903 05:57:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:39.162 05:57:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:39.420 05:57:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:39.420 05:57:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:39.678 05:57:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:39.678 05:57:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:39.938 05:57:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:39.938 [2024-07-13 05:57:31.621154] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:39.938 05:57:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:40.196 05:57:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:40.454 05:57:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid=d95af516-4532-4483-a837-b3cd72acabce -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:40.712 05:57:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:40.712 05:57:32 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1198 -- # local i=0 00:10:40.712 05:57:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:40.712 05:57:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:10:40.712 05:57:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:10:40.712 05:57:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:10:42.618 05:57:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:42.618 05:57:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:42.618 05:57:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:42.618 05:57:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:10:42.618 05:57:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:42.618 05:57:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:10:42.618 05:57:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:42.618 [global] 00:10:42.618 thread=1 00:10:42.618 invalidate=1 00:10:42.618 rw=write 00:10:42.618 time_based=1 00:10:42.618 runtime=1 00:10:42.618 ioengine=libaio 00:10:42.618 direct=1 00:10:42.618 bs=4096 00:10:42.618 iodepth=1 00:10:42.618 norandommap=0 00:10:42.618 numjobs=1 00:10:42.618 00:10:42.618 verify_dump=1 00:10:42.618 verify_backlog=512 00:10:42.618 verify_state_save=0 00:10:42.618 do_verify=1 00:10:42.618 verify=crc32c-intel 00:10:42.618 [job0] 00:10:42.618 filename=/dev/nvme0n1 00:10:42.618 [job1] 00:10:42.618 filename=/dev/nvme0n2 00:10:42.618 [job2] 00:10:42.618 filename=/dev/nvme0n3 00:10:42.618 [job3] 00:10:42.618 filename=/dev/nvme0n4 00:10:42.875 Could not set queue depth (nvme0n1) 00:10:42.875 Could not set queue depth (nvme0n2) 00:10:42.875 Could not set queue depth (nvme0n3) 00:10:42.875 Could not set queue depth (nvme0n4) 00:10:42.875 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:42.875 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:42.875 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:42.875 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:42.875 fio-3.35 00:10:42.875 Starting 4 threads 00:10:44.249 00:10:44.249 job0: (groupid=0, jobs=1): err= 0: pid=80120: Sat Jul 13 05:57:35 2024 00:10:44.249 read: IOPS=2491, BW=9966KiB/s (10.2MB/s)(9976KiB/1001msec) 00:10:44.249 slat (nsec): min=11670, max=61037, avg=16364.78, stdev=3893.69 00:10:44.249 clat (usec): min=139, max=560, avg=220.44, stdev=73.11 00:10:44.249 lat (usec): min=154, max=576, avg=236.80, stdev=72.98 00:10:44.249 clat percentiles (usec): 00:10:44.249 | 1.00th=[ 147], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 165], 00:10:44.249 | 30.00th=[ 169], 40.00th=[ 176], 50.00th=[ 182], 60.00th=[ 196], 00:10:44.249 | 70.00th=[ 251], 80.00th=[ 281], 90.00th=[ 351], 95.00th=[ 371], 00:10:44.249 | 99.00th=[ 396], 99.50th=[ 404], 99.90th=[ 486], 99.95th=[ 506], 00:10:44.249 | 99.99th=[ 562] 00:10:44.249 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:44.249 slat (nsec): 
min=13743, max=84836, avg=23454.74, stdev=5775.03 00:10:44.249 clat (usec): min=93, max=805, avg=132.43, stdev=30.45 00:10:44.249 lat (usec): min=114, max=827, avg=155.88, stdev=31.72 00:10:44.249 clat percentiles (usec): 00:10:44.249 | 1.00th=[ 101], 5.00th=[ 108], 10.00th=[ 111], 20.00th=[ 115], 00:10:44.249 | 30.00th=[ 119], 40.00th=[ 122], 50.00th=[ 125], 60.00th=[ 129], 00:10:44.249 | 70.00th=[ 135], 80.00th=[ 143], 90.00th=[ 167], 95.00th=[ 186], 00:10:44.249 | 99.00th=[ 237], 99.50th=[ 253], 99.90th=[ 347], 99.95th=[ 490], 00:10:44.249 | 99.99th=[ 807] 00:10:44.249 bw ( KiB/s): min=12288, max=12288, per=30.03%, avg=12288.00, stdev= 0.00, samples=1 00:10:44.249 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:44.249 lat (usec) : 100=0.40%, 250=84.43%, 500=15.12%, 750=0.04%, 1000=0.02% 00:10:44.249 cpu : usr=2.10%, sys=8.20%, ctx=5054, majf=0, minf=5 00:10:44.249 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:44.249 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.249 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.249 issued rwts: total=2494,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:44.249 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:44.249 job1: (groupid=0, jobs=1): err= 0: pid=80121: Sat Jul 13 05:57:35 2024 00:10:44.249 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:10:44.249 slat (nsec): min=11136, max=44032, avg=14411.26, stdev=3495.82 00:10:44.249 clat (usec): min=129, max=239, avg=165.44, stdev=14.74 00:10:44.249 lat (usec): min=142, max=252, avg=179.85, stdev=15.26 00:10:44.249 clat percentiles (usec): 00:10:44.249 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 153], 00:10:44.249 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 167], 00:10:44.249 | 70.00th=[ 172], 80.00th=[ 178], 90.00th=[ 186], 95.00th=[ 194], 00:10:44.249 | 99.00th=[ 208], 99.50th=[ 212], 99.90th=[ 229], 99.95th=[ 233], 00:10:44.249 | 99.99th=[ 239] 00:10:44.249 write: IOPS=3069, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:44.249 slat (usec): min=14, max=103, avg=21.61, stdev= 5.30 00:10:44.249 clat (usec): min=88, max=2014, avg=120.43, stdev=38.23 00:10:44.249 lat (usec): min=106, max=2035, avg=142.04, stdev=38.74 00:10:44.249 clat percentiles (usec): 00:10:44.249 | 1.00th=[ 94], 5.00th=[ 100], 10.00th=[ 104], 20.00th=[ 109], 00:10:44.249 | 30.00th=[ 113], 40.00th=[ 116], 50.00th=[ 119], 60.00th=[ 122], 00:10:44.249 | 70.00th=[ 125], 80.00th=[ 129], 90.00th=[ 137], 95.00th=[ 145], 00:10:44.249 | 99.00th=[ 163], 99.50th=[ 169], 99.90th=[ 437], 99.95th=[ 490], 00:10:44.249 | 99.99th=[ 2008] 00:10:44.249 bw ( KiB/s): min=12296, max=12296, per=30.05%, avg=12296.00, stdev= 0.00, samples=1 00:10:44.249 iops : min= 3074, max= 3074, avg=3074.00, stdev= 0.00, samples=1 00:10:44.249 lat (usec) : 100=2.51%, 250=97.43%, 500=0.05% 00:10:44.249 lat (msec) : 4=0.02% 00:10:44.249 cpu : usr=2.30%, sys=8.80%, ctx=6145, majf=0, minf=11 00:10:44.249 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:44.249 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.249 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.249 issued rwts: total=3072,3073,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:44.249 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:44.249 job2: (groupid=0, jobs=1): err= 0: pid=80122: Sat Jul 13 05:57:35 2024 00:10:44.249 
read: IOPS=1722, BW=6889KiB/s (7054kB/s)(6896KiB/1001msec) 00:10:44.249 slat (nsec): min=8507, max=58722, avg=16368.54, stdev=5470.99 00:10:44.249 clat (usec): min=170, max=6258, avg=295.01, stdev=182.44 00:10:44.249 lat (usec): min=189, max=6273, avg=311.38, stdev=182.98 00:10:44.249 clat percentiles (usec): 00:10:44.249 | 1.00th=[ 225], 5.00th=[ 241], 10.00th=[ 251], 20.00th=[ 260], 00:10:44.249 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 285], 00:10:44.249 | 70.00th=[ 289], 80.00th=[ 310], 90.00th=[ 347], 95.00th=[ 371], 00:10:44.249 | 99.00th=[ 490], 99.50th=[ 529], 99.90th=[ 4293], 99.95th=[ 6259], 00:10:44.250 | 99.99th=[ 6259] 00:10:44.250 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:44.250 slat (usec): min=12, max=104, avg=23.54, stdev= 6.24 00:10:44.250 clat (usec): min=118, max=3382, avg=198.86, stdev=93.54 00:10:44.250 lat (usec): min=140, max=3419, avg=222.40, stdev=95.03 00:10:44.250 clat percentiles (usec): 00:10:44.250 | 1.00th=[ 133], 5.00th=[ 145], 10.00th=[ 151], 20.00th=[ 161], 00:10:44.250 | 30.00th=[ 174], 40.00th=[ 186], 50.00th=[ 196], 60.00th=[ 202], 00:10:44.250 | 70.00th=[ 208], 80.00th=[ 219], 90.00th=[ 233], 95.00th=[ 255], 00:10:44.250 | 99.00th=[ 371], 99.50th=[ 383], 99.90th=[ 1172], 99.95th=[ 1958], 00:10:44.250 | 99.99th=[ 3392] 00:10:44.250 bw ( KiB/s): min= 8175, max= 8175, per=19.98%, avg=8175.00, stdev= 0.00, samples=1 00:10:44.250 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:10:44.250 lat (usec) : 250=55.70%, 500=43.82%, 750=0.27%, 1000=0.05% 00:10:44.250 lat (msec) : 2=0.08%, 4=0.03%, 10=0.05% 00:10:44.250 cpu : usr=1.90%, sys=6.00%, ctx=3776, majf=0, minf=10 00:10:44.250 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:44.250 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.250 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.250 issued rwts: total=1724,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:44.250 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:44.250 job3: (groupid=0, jobs=1): err= 0: pid=80123: Sat Jul 13 05:57:35 2024 00:10:44.250 read: IOPS=2063, BW=8256KiB/s (8454kB/s)(8264KiB/1001msec) 00:10:44.250 slat (nsec): min=11797, max=80385, avg=18234.67, stdev=7121.26 00:10:44.250 clat (usec): min=145, max=575, avg=227.23, stdev=59.48 00:10:44.250 lat (usec): min=158, max=597, avg=245.46, stdev=61.32 00:10:44.250 clat percentiles (usec): 00:10:44.250 | 1.00th=[ 151], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 172], 00:10:44.250 | 30.00th=[ 180], 40.00th=[ 190], 50.00th=[ 212], 60.00th=[ 258], 00:10:44.250 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 289], 95.00th=[ 306], 00:10:44.250 | 99.00th=[ 416], 99.50th=[ 506], 99.90th=[ 537], 99.95th=[ 553], 00:10:44.250 | 99.99th=[ 578] 00:10:44.250 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:44.250 slat (nsec): min=13762, max=99593, avg=24866.09, stdev=9220.15 00:10:44.250 clat (usec): min=99, max=327, avg=163.68, stdev=42.09 00:10:44.250 lat (usec): min=118, max=393, avg=188.54, stdev=44.76 00:10:44.250 clat percentiles (usec): 00:10:44.250 | 1.00th=[ 104], 5.00th=[ 111], 10.00th=[ 115], 20.00th=[ 122], 00:10:44.250 | 30.00th=[ 129], 40.00th=[ 137], 50.00th=[ 151], 60.00th=[ 184], 00:10:44.250 | 70.00th=[ 198], 80.00th=[ 206], 90.00th=[ 219], 95.00th=[ 231], 00:10:44.250 | 99.00th=[ 251], 99.50th=[ 260], 99.90th=[ 277], 99.95th=[ 293], 00:10:44.250 | 99.99th=[ 326] 00:10:44.250 bw ( 
KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 00:10:44.250 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:44.250 lat (usec) : 100=0.02%, 250=80.31%, 500=19.39%, 750=0.28% 00:10:44.250 cpu : usr=2.40%, sys=7.80%, ctx=4633, majf=0, minf=9 00:10:44.250 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:44.250 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.250 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.250 issued rwts: total=2066,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:44.250 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:44.250 00:10:44.250 Run status group 0 (all jobs): 00:10:44.250 READ: bw=36.5MiB/s (38.3MB/s), 6889KiB/s-12.0MiB/s (7054kB/s-12.6MB/s), io=36.5MiB (38.3MB), run=1001-1001msec 00:10:44.250 WRITE: bw=40.0MiB/s (41.9MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=40.0MiB (41.9MB), run=1001-1001msec 00:10:44.250 00:10:44.250 Disk stats (read/write): 00:10:44.250 nvme0n1: ios=2097/2536, merge=0/0, ticks=456/364, in_queue=820, util=87.54% 00:10:44.250 nvme0n2: ios=2560/2666, merge=0/0, ticks=439/354, in_queue=793, util=87.06% 00:10:44.250 nvme0n3: ios=1523/1536, merge=0/0, ticks=458/333, in_queue=791, util=88.45% 00:10:44.250 nvme0n4: ios=1698/2048, merge=0/0, ticks=406/378, in_queue=784, util=89.64% 00:10:44.250 05:57:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:44.250 [global] 00:10:44.250 thread=1 00:10:44.250 invalidate=1 00:10:44.250 rw=randwrite 00:10:44.250 time_based=1 00:10:44.250 runtime=1 00:10:44.250 ioengine=libaio 00:10:44.250 direct=1 00:10:44.250 bs=4096 00:10:44.250 iodepth=1 00:10:44.250 norandommap=0 00:10:44.250 numjobs=1 00:10:44.250 00:10:44.250 verify_dump=1 00:10:44.250 verify_backlog=512 00:10:44.250 verify_state_save=0 00:10:44.250 do_verify=1 00:10:44.250 verify=crc32c-intel 00:10:44.250 [job0] 00:10:44.250 filename=/dev/nvme0n1 00:10:44.250 [job1] 00:10:44.250 filename=/dev/nvme0n2 00:10:44.250 [job2] 00:10:44.250 filename=/dev/nvme0n3 00:10:44.250 [job3] 00:10:44.250 filename=/dev/nvme0n4 00:10:44.250 Could not set queue depth (nvme0n1) 00:10:44.250 Could not set queue depth (nvme0n2) 00:10:44.250 Could not set queue depth (nvme0n3) 00:10:44.250 Could not set queue depth (nvme0n4) 00:10:44.250 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:44.250 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:44.250 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:44.250 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:44.250 fio-3.35 00:10:44.250 Starting 4 threads 00:10:45.625 00:10:45.625 job0: (groupid=0, jobs=1): err= 0: pid=80176: Sat Jul 13 05:57:37 2024 00:10:45.625 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:10:45.625 slat (nsec): min=10655, max=57605, avg=12997.32, stdev=3691.30 00:10:45.625 clat (usec): min=127, max=785, avg=162.25, stdev=22.53 00:10:45.625 lat (usec): min=139, max=802, avg=175.25, stdev=22.82 00:10:45.625 clat percentiles (usec): 00:10:45.625 | 1.00th=[ 137], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 149], 00:10:45.625 | 30.00th=[ 153], 40.00th=[ 155], 50.00th=[ 159], 60.00th=[ 
163], 00:10:45.625 | 70.00th=[ 167], 80.00th=[ 174], 90.00th=[ 186], 95.00th=[ 194], 00:10:45.625 | 99.00th=[ 210], 99.50th=[ 219], 99.90th=[ 281], 99.95th=[ 676], 00:10:45.625 | 99.99th=[ 783] 00:10:45.625 write: IOPS=3174, BW=12.4MiB/s (13.0MB/s)(12.4MiB/1001msec); 0 zone resets 00:10:45.625 slat (nsec): min=12614, max=72584, avg=19501.10, stdev=5254.15 00:10:45.625 clat (usec): min=89, max=205, avg=122.61, stdev=15.47 00:10:45.625 lat (usec): min=105, max=271, avg=142.11, stdev=16.36 00:10:45.625 clat percentiles (usec): 00:10:45.625 | 1.00th=[ 96], 5.00th=[ 102], 10.00th=[ 105], 20.00th=[ 111], 00:10:45.625 | 30.00th=[ 114], 40.00th=[ 118], 50.00th=[ 121], 60.00th=[ 124], 00:10:45.625 | 70.00th=[ 129], 80.00th=[ 135], 90.00th=[ 145], 95.00th=[ 153], 00:10:45.625 | 99.00th=[ 167], 99.50th=[ 176], 99.90th=[ 190], 99.95th=[ 204], 00:10:45.625 | 99.99th=[ 206] 00:10:45.625 bw ( KiB/s): min=12744, max=12744, per=26.12%, avg=12744.00, stdev= 0.00, samples=1 00:10:45.625 iops : min= 3186, max= 3186, avg=3186.00, stdev= 0.00, samples=1 00:10:45.625 lat (usec) : 100=1.76%, 250=98.14%, 500=0.06%, 750=0.02%, 1000=0.02% 00:10:45.625 cpu : usr=2.30%, sys=8.00%, ctx=6251, majf=0, minf=12 00:10:45.625 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:45.625 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.625 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.625 issued rwts: total=3072,3178,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:45.625 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:45.625 job1: (groupid=0, jobs=1): err= 0: pid=80177: Sat Jul 13 05:57:37 2024 00:10:45.625 read: IOPS=3040, BW=11.9MiB/s (12.5MB/s)(11.9MiB/1001msec) 00:10:45.625 slat (nsec): min=10939, max=47470, avg=13465.66, stdev=4138.18 00:10:45.625 clat (usec): min=128, max=2080, avg=166.41, stdev=42.23 00:10:45.625 lat (usec): min=141, max=2100, avg=179.87, stdev=42.54 00:10:45.625 clat percentiles (usec): 00:10:45.625 | 1.00th=[ 137], 5.00th=[ 143], 10.00th=[ 145], 20.00th=[ 151], 00:10:45.626 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 167], 00:10:45.626 | 70.00th=[ 172], 80.00th=[ 180], 90.00th=[ 190], 95.00th=[ 200], 00:10:45.626 | 99.00th=[ 223], 99.50th=[ 289], 99.90th=[ 445], 99.95th=[ 519], 00:10:45.626 | 99.99th=[ 2073] 00:10:45.626 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:45.626 slat (nsec): min=12837, max=65570, avg=19560.86, stdev=4905.05 00:10:45.626 clat (usec): min=88, max=314, avg=124.38, stdev=16.02 00:10:45.626 lat (usec): min=106, max=332, avg=143.94, stdev=16.69 00:10:45.626 clat percentiles (usec): 00:10:45.626 | 1.00th=[ 97], 5.00th=[ 103], 10.00th=[ 108], 20.00th=[ 112], 00:10:45.626 | 30.00th=[ 116], 40.00th=[ 119], 50.00th=[ 122], 60.00th=[ 126], 00:10:45.626 | 70.00th=[ 131], 80.00th=[ 137], 90.00th=[ 147], 95.00th=[ 153], 00:10:45.626 | 99.00th=[ 169], 99.50th=[ 176], 99.90th=[ 196], 99.95th=[ 289], 00:10:45.626 | 99.99th=[ 314] 00:10:45.626 bw ( KiB/s): min=12288, max=12288, per=25.18%, avg=12288.00, stdev= 0.00, samples=1 00:10:45.626 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:45.626 lat (usec) : 100=1.11%, 250=98.48%, 500=0.38%, 750=0.02% 00:10:45.626 lat (msec) : 4=0.02% 00:10:45.626 cpu : usr=2.80%, sys=7.60%, ctx=6116, majf=0, minf=13 00:10:45.626 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:45.626 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:10:45.626 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.626 issued rwts: total=3044,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:45.626 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:45.626 job2: (groupid=0, jobs=1): err= 0: pid=80178: Sat Jul 13 05:57:37 2024 00:10:45.626 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:10:45.626 slat (nsec): min=11392, max=72505, avg=17438.85, stdev=5928.93 00:10:45.626 clat (usec): min=144, max=454, avg=182.85, stdev=18.73 00:10:45.626 lat (usec): min=156, max=468, avg=200.29, stdev=20.03 00:10:45.626 clat percentiles (usec): 00:10:45.626 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 169], 00:10:45.626 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 186], 00:10:45.626 | 70.00th=[ 190], 80.00th=[ 196], 90.00th=[ 206], 95.00th=[ 217], 00:10:45.626 | 99.00th=[ 237], 99.50th=[ 247], 99.90th=[ 289], 99.95th=[ 314], 00:10:45.626 | 99.99th=[ 453] 00:10:45.626 write: IOPS=2959, BW=11.6MiB/s (12.1MB/s)(11.6MiB/1001msec); 0 zone resets 00:10:45.626 slat (usec): min=13, max=132, avg=25.32, stdev= 9.25 00:10:45.626 clat (usec): min=102, max=2926, avg=135.31, stdev=53.63 00:10:45.626 lat (usec): min=120, max=2950, avg=160.63, stdev=54.46 00:10:45.626 clat percentiles (usec): 00:10:45.626 | 1.00th=[ 109], 5.00th=[ 115], 10.00th=[ 118], 20.00th=[ 122], 00:10:45.626 | 30.00th=[ 126], 40.00th=[ 129], 50.00th=[ 133], 60.00th=[ 135], 00:10:45.626 | 70.00th=[ 141], 80.00th=[ 147], 90.00th=[ 157], 95.00th=[ 165], 00:10:45.626 | 99.00th=[ 180], 99.50th=[ 188], 99.90th=[ 223], 99.95th=[ 251], 00:10:45.626 | 99.99th=[ 2933] 00:10:45.626 bw ( KiB/s): min=12288, max=12288, per=25.18%, avg=12288.00, stdev= 0.00, samples=1 00:10:45.626 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:45.626 lat (usec) : 250=99.76%, 500=0.22% 00:10:45.626 lat (msec) : 4=0.02% 00:10:45.626 cpu : usr=2.70%, sys=9.10%, ctx=5522, majf=0, minf=9 00:10:45.626 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:45.626 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.626 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.626 issued rwts: total=2560,2962,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:45.626 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:45.626 job3: (groupid=0, jobs=1): err= 0: pid=80179: Sat Jul 13 05:57:37 2024 00:10:45.626 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:10:45.626 slat (nsec): min=11199, max=72689, avg=14964.62, stdev=5260.98 00:10:45.626 clat (usec): min=125, max=693, avg=183.73, stdev=24.99 00:10:45.626 lat (usec): min=154, max=730, avg=198.70, stdev=25.87 00:10:45.626 clat percentiles (usec): 00:10:45.626 | 1.00th=[ 155], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 169], 00:10:45.626 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 186], 00:10:45.626 | 70.00th=[ 190], 80.00th=[ 198], 90.00th=[ 208], 95.00th=[ 217], 00:10:45.626 | 99.00th=[ 233], 99.50th=[ 241], 99.90th=[ 603], 99.95th=[ 635], 00:10:45.626 | 99.99th=[ 693] 00:10:45.626 write: IOPS=2997, BW=11.7MiB/s (12.3MB/s)(11.7MiB/1001msec); 0 zone resets 00:10:45.626 slat (nsec): min=13673, max=77478, avg=22590.00, stdev=7543.36 00:10:45.626 clat (usec): min=105, max=451, avg=137.75, stdev=16.90 00:10:45.626 lat (usec): min=123, max=474, avg=160.34, stdev=18.62 00:10:45.626 clat percentiles (usec): 00:10:45.626 | 1.00th=[ 114], 5.00th=[ 119], 10.00th=[ 121], 20.00th=[ 125], 
00:10:45.626 | 30.00th=[ 128], 40.00th=[ 131], 50.00th=[ 135], 60.00th=[ 139], 00:10:45.626 | 70.00th=[ 143], 80.00th=[ 151], 90.00th=[ 161], 95.00th=[ 169], 00:10:45.626 | 99.00th=[ 186], 99.50th=[ 194], 99.90th=[ 212], 99.95th=[ 219], 00:10:45.626 | 99.99th=[ 453] 00:10:45.626 bw ( KiB/s): min=12288, max=12288, per=25.18%, avg=12288.00, stdev= 0.00, samples=1 00:10:45.626 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:45.626 lat (usec) : 250=99.82%, 500=0.13%, 750=0.05% 00:10:45.626 cpu : usr=3.20%, sys=7.50%, ctx=5561, majf=0, minf=11 00:10:45.626 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:45.626 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.626 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.626 issued rwts: total=2560,3000,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:45.626 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:45.626 00:10:45.626 Run status group 0 (all jobs): 00:10:45.626 READ: bw=43.8MiB/s (46.0MB/s), 9.99MiB/s-12.0MiB/s (10.5MB/s-12.6MB/s), io=43.9MiB (46.0MB), run=1001-1001msec 00:10:45.626 WRITE: bw=47.7MiB/s (50.0MB/s), 11.6MiB/s-12.4MiB/s (12.1MB/s-13.0MB/s), io=47.7MiB (50.0MB), run=1001-1001msec 00:10:45.626 00:10:45.626 Disk stats (read/write): 00:10:45.626 nvme0n1: ios=2610/2851, merge=0/0, ticks=461/373, in_queue=834, util=88.26% 00:10:45.626 nvme0n2: ios=2589/2712, merge=0/0, ticks=449/360, in_queue=809, util=87.83% 00:10:45.626 nvme0n3: ios=2168/2560, merge=0/0, ticks=403/375, in_queue=778, util=89.23% 00:10:45.626 nvme0n4: ios=2215/2560, merge=0/0, ticks=421/379, in_queue=800, util=89.80% 00:10:45.626 05:57:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:45.626 [global] 00:10:45.626 thread=1 00:10:45.626 invalidate=1 00:10:45.626 rw=write 00:10:45.626 time_based=1 00:10:45.626 runtime=1 00:10:45.626 ioengine=libaio 00:10:45.626 direct=1 00:10:45.626 bs=4096 00:10:45.626 iodepth=128 00:10:45.626 norandommap=0 00:10:45.626 numjobs=1 00:10:45.626 00:10:45.626 verify_dump=1 00:10:45.626 verify_backlog=512 00:10:45.626 verify_state_save=0 00:10:45.626 do_verify=1 00:10:45.626 verify=crc32c-intel 00:10:45.626 [job0] 00:10:45.626 filename=/dev/nvme0n1 00:10:45.626 [job1] 00:10:45.626 filename=/dev/nvme0n2 00:10:45.626 [job2] 00:10:45.626 filename=/dev/nvme0n3 00:10:45.626 [job3] 00:10:45.626 filename=/dev/nvme0n4 00:10:45.626 Could not set queue depth (nvme0n1) 00:10:45.626 Could not set queue depth (nvme0n2) 00:10:45.626 Could not set queue depth (nvme0n3) 00:10:45.626 Could not set queue depth (nvme0n4) 00:10:45.626 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:45.626 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:45.626 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:45.626 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:45.626 fio-3.35 00:10:45.626 Starting 4 threads 00:10:46.998 00:10:46.998 job0: (groupid=0, jobs=1): err= 0: pid=80238: Sat Jul 13 05:57:38 2024 00:10:46.998 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:10:46.998 slat (usec): min=5, max=3110, avg=94.02, stdev=441.19 00:10:46.998 clat (usec): min=8758, max=14699, avg=12594.88, stdev=711.98 
00:10:46.998 lat (usec): min=8775, max=14752, avg=12688.90, stdev=568.21 00:10:46.998 clat percentiles (usec): 00:10:46.998 | 1.00th=[ 9765], 5.00th=[11863], 10.00th=[12125], 20.00th=[12256], 00:10:46.998 | 30.00th=[12387], 40.00th=[12518], 50.00th=[12518], 60.00th=[12649], 00:10:46.998 | 70.00th=[12911], 80.00th=[13042], 90.00th=[13304], 95.00th=[13435], 00:10:46.998 | 99.00th=[14615], 99.50th=[14615], 99.90th=[14746], 99.95th=[14746], 00:10:46.998 | 99.99th=[14746] 00:10:46.998 write: IOPS=5169, BW=20.2MiB/s (21.2MB/s)(20.3MiB/1003msec); 0 zone resets 00:10:46.998 slat (usec): min=11, max=3238, avg=92.40, stdev=391.14 00:10:46.998 clat (usec): min=265, max=13835, avg=11995.66, stdev=1101.76 00:10:46.998 lat (usec): min=2489, max=13859, avg=12088.06, stdev=1029.05 00:10:46.998 clat percentiles (usec): 00:10:46.998 | 1.00th=[ 5932], 5.00th=[11076], 10.00th=[11469], 20.00th=[11731], 00:10:46.998 | 30.00th=[11863], 40.00th=[11994], 50.00th=[11994], 60.00th=[12125], 00:10:46.998 | 70.00th=[12256], 80.00th=[12518], 90.00th=[12911], 95.00th=[13435], 00:10:46.998 | 99.00th=[13698], 99.50th=[13829], 99.90th=[13829], 99.95th=[13829], 00:10:46.998 | 99.99th=[13829] 00:10:46.998 bw ( KiB/s): min=20480, max=20521, per=26.12%, avg=20500.50, stdev=28.99, samples=2 00:10:46.998 iops : min= 5120, max= 5130, avg=5125.00, stdev= 7.07, samples=2 00:10:46.998 lat (usec) : 500=0.01% 00:10:46.998 lat (msec) : 4=0.31%, 10=2.23%, 20=97.45% 00:10:46.998 cpu : usr=5.59%, sys=13.27%, ctx=326, majf=0, minf=13 00:10:46.998 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:46.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.998 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:46.998 issued rwts: total=5120,5185,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:46.998 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:46.998 job1: (groupid=0, jobs=1): err= 0: pid=80239: Sat Jul 13 05:57:38 2024 00:10:46.998 read: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec) 00:10:46.998 slat (usec): min=3, max=5056, avg=94.27, stdev=419.92 00:10:46.998 clat (usec): min=8309, max=18184, avg=12554.54, stdev=942.79 00:10:46.998 lat (usec): min=8851, max=18211, avg=12648.82, stdev=955.79 00:10:46.998 clat percentiles (usec): 00:10:46.998 | 1.00th=[10028], 5.00th=[10945], 10.00th=[11338], 20.00th=[12125], 00:10:46.998 | 30.00th=[12256], 40.00th=[12387], 50.00th=[12518], 60.00th=[12649], 00:10:46.998 | 70.00th=[12780], 80.00th=[13042], 90.00th=[13566], 95.00th=[14222], 00:10:46.998 | 99.00th=[15533], 99.50th=[15926], 99.90th=[16712], 99.95th=[17171], 00:10:46.998 | 99.99th=[18220] 00:10:46.998 write: IOPS=5277, BW=20.6MiB/s (21.6MB/s)(20.7MiB/1004msec); 0 zone resets 00:10:46.998 slat (usec): min=10, max=7120, avg=90.49, stdev=531.96 00:10:46.998 clat (usec): min=336, max=20091, avg=11830.88, stdev=1586.43 00:10:46.998 lat (usec): min=3294, max=20122, avg=11921.36, stdev=1660.22 00:10:46.998 clat percentiles (usec): 00:10:46.998 | 1.00th=[ 4686], 5.00th=[ 9634], 10.00th=[10683], 20.00th=[11207], 00:10:46.998 | 30.00th=[11338], 40.00th=[11469], 50.00th=[11600], 60.00th=[11863], 00:10:46.998 | 70.00th=[12125], 80.00th=[12780], 90.00th=[13566], 95.00th=[14746], 00:10:46.998 | 99.00th=[15533], 99.50th=[16188], 99.90th=[18482], 99.95th=[19006], 00:10:46.998 | 99.99th=[20055] 00:10:46.998 bw ( KiB/s): min=20480, max=20888, per=26.35%, avg=20684.00, stdev=288.50, samples=2 00:10:46.998 iops : min= 5120, max= 5222, avg=5171.00, stdev=72.12, 
samples=2 00:10:46.998 lat (usec) : 500=0.01% 00:10:46.998 lat (msec) : 4=0.25%, 10=3.70%, 20=96.04%, 50=0.01% 00:10:46.998 cpu : usr=3.89%, sys=14.46%, ctx=320, majf=0, minf=6 00:10:46.998 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:46.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.998 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:46.998 issued rwts: total=5120,5299,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:46.998 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:46.998 job2: (groupid=0, jobs=1): err= 0: pid=80240: Sat Jul 13 05:57:38 2024 00:10:46.998 read: IOPS=4308, BW=16.8MiB/s (17.6MB/s)(16.9MiB/1003msec) 00:10:46.998 slat (usec): min=5, max=3631, avg=109.74, stdev=520.41 00:10:46.998 clat (usec): min=376, max=17010, avg=14506.68, stdev=1434.10 00:10:46.998 lat (usec): min=2980, max=17040, avg=14616.42, stdev=1339.13 00:10:46.998 clat percentiles (usec): 00:10:46.998 | 1.00th=[ 6652], 5.00th=[12518], 10.00th=[14222], 20.00th=[14353], 00:10:46.998 | 30.00th=[14484], 40.00th=[14615], 50.00th=[14615], 60.00th=[14746], 00:10:46.998 | 70.00th=[14877], 80.00th=[15139], 90.00th=[15401], 95.00th=[15664], 00:10:46.998 | 99.00th=[16188], 99.50th=[16319], 99.90th=[16909], 99.95th=[16909], 00:10:46.998 | 99.99th=[16909] 00:10:46.998 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:10:46.998 slat (usec): min=10, max=3635, avg=106.17, stdev=459.66 00:10:46.998 clat (usec): min=10264, max=15419, avg=13870.38, stdev=628.56 00:10:46.998 lat (usec): min=11256, max=15480, avg=13976.55, stdev=431.97 00:10:46.998 clat percentiles (usec): 00:10:46.998 | 1.00th=[11076], 5.00th=[13304], 10.00th=[13435], 20.00th=[13566], 00:10:46.998 | 30.00th=[13698], 40.00th=[13829], 50.00th=[13829], 60.00th=[13960], 00:10:46.998 | 70.00th=[14091], 80.00th=[14222], 90.00th=[14615], 95.00th=[14746], 00:10:46.998 | 99.00th=[15139], 99.50th=[15270], 99.90th=[15401], 99.95th=[15401], 00:10:46.998 | 99.99th=[15401] 00:10:46.998 bw ( KiB/s): min=18424, max=18440, per=23.48%, avg=18432.00, stdev=11.31, samples=2 00:10:46.998 iops : min= 4606, max= 4610, avg=4608.00, stdev= 2.83, samples=2 00:10:46.998 lat (usec) : 500=0.01% 00:10:46.998 lat (msec) : 4=0.36%, 10=0.39%, 20=99.24% 00:10:46.998 cpu : usr=4.89%, sys=12.77%, ctx=281, majf=0, minf=11 00:10:46.998 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:46.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.998 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:46.998 issued rwts: total=4321,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:46.998 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:46.998 job3: (groupid=0, jobs=1): err= 0: pid=80241: Sat Jul 13 05:57:38 2024 00:10:46.998 read: IOPS=4333, BW=16.9MiB/s (17.8MB/s)(17.0MiB/1003msec) 00:10:46.998 slat (usec): min=5, max=3965, avg=107.90, stdev=432.49 00:10:46.998 clat (usec): min=677, max=18026, avg=14361.03, stdev=1535.52 00:10:46.998 lat (usec): min=2578, max=19533, avg=14468.93, stdev=1571.43 00:10:46.998 clat percentiles (usec): 00:10:46.998 | 1.00th=[ 7177], 5.00th=[12649], 10.00th=[13435], 20.00th=[13960], 00:10:46.998 | 30.00th=[14091], 40.00th=[14222], 50.00th=[14353], 60.00th=[14484], 00:10:46.998 | 70.00th=[14877], 80.00th=[15139], 90.00th=[15926], 95.00th=[16319], 00:10:46.998 | 99.00th=[17171], 99.50th=[17433], 99.90th=[17695], 99.95th=[17695], 00:10:46.998 | 
99.99th=[17957] 00:10:46.998 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:10:46.998 slat (usec): min=8, max=9197, avg=107.57, stdev=538.37 00:10:46.998 clat (usec): min=11203, max=25466, avg=13943.92, stdev=1516.39 00:10:46.998 lat (usec): min=11228, max=25492, avg=14051.50, stdev=1600.31 00:10:46.998 clat percentiles (usec): 00:10:46.998 | 1.00th=[11338], 5.00th=[12649], 10.00th=[12911], 20.00th=[13173], 00:10:46.998 | 30.00th=[13435], 40.00th=[13435], 50.00th=[13566], 60.00th=[13698], 00:10:46.998 | 70.00th=[13829], 80.00th=[14222], 90.00th=[15401], 95.00th=[16581], 00:10:46.998 | 99.00th=[21627], 99.50th=[21627], 99.90th=[21890], 99.95th=[25035], 00:10:46.998 | 99.99th=[25560] 00:10:46.998 bw ( KiB/s): min=18108, max=18792, per=23.51%, avg=18450.00, stdev=483.66, samples=2 00:10:46.998 iops : min= 4527, max= 4698, avg=4612.50, stdev=120.92, samples=2 00:10:46.998 lat (usec) : 750=0.01% 00:10:46.998 lat (msec) : 4=0.22%, 10=0.71%, 20=98.08%, 50=0.97% 00:10:46.998 cpu : usr=4.39%, sys=12.67%, ctx=321, majf=0, minf=7 00:10:46.998 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:46.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.998 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:46.998 issued rwts: total=4347,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:46.998 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:46.998 00:10:46.998 Run status group 0 (all jobs): 00:10:46.998 READ: bw=73.6MiB/s (77.1MB/s), 16.8MiB/s-19.9MiB/s (17.6MB/s-20.9MB/s), io=73.9MiB (77.4MB), run=1003-1004msec 00:10:46.998 WRITE: bw=76.6MiB/s (80.4MB/s), 17.9MiB/s-20.6MiB/s (18.8MB/s-21.6MB/s), io=77.0MiB (80.7MB), run=1003-1004msec 00:10:46.998 00:10:46.998 Disk stats (read/write): 00:10:46.998 nvme0n1: ios=4370/4608, merge=0/0, ticks=12002/11667, in_queue=23669, util=89.07% 00:10:46.998 nvme0n2: ios=4392/4608, merge=0/0, ticks=26268/22316, in_queue=48584, util=88.16% 00:10:46.999 nvme0n3: ios=3637/4096, merge=0/0, ticks=11852/12251, in_queue=24103, util=89.50% 00:10:46.999 nvme0n4: ios=3615/4096, merge=0/0, ticks=16387/16031, in_queue=32418, util=89.43% 00:10:46.999 05:57:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:46.999 [global] 00:10:46.999 thread=1 00:10:46.999 invalidate=1 00:10:46.999 rw=randwrite 00:10:46.999 time_based=1 00:10:46.999 runtime=1 00:10:46.999 ioengine=libaio 00:10:46.999 direct=1 00:10:46.999 bs=4096 00:10:46.999 iodepth=128 00:10:46.999 norandommap=0 00:10:46.999 numjobs=1 00:10:46.999 00:10:46.999 verify_dump=1 00:10:46.999 verify_backlog=512 00:10:46.999 verify_state_save=0 00:10:46.999 do_verify=1 00:10:46.999 verify=crc32c-intel 00:10:46.999 [job0] 00:10:46.999 filename=/dev/nvme0n1 00:10:46.999 [job1] 00:10:46.999 filename=/dev/nvme0n2 00:10:46.999 [job2] 00:10:46.999 filename=/dev/nvme0n3 00:10:46.999 [job3] 00:10:46.999 filename=/dev/nvme0n4 00:10:46.999 Could not set queue depth (nvme0n1) 00:10:46.999 Could not set queue depth (nvme0n2) 00:10:46.999 Could not set queue depth (nvme0n3) 00:10:46.999 Could not set queue depth (nvme0n4) 00:10:46.999 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:46.999 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:46.999 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, 
(W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:46.999 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:46.999 fio-3.35 00:10:46.999 Starting 4 threads 00:10:48.369 00:10:48.369 job0: (groupid=0, jobs=1): err= 0: pid=80299: Sat Jul 13 05:57:39 2024 00:10:48.369 read: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec) 00:10:48.369 slat (usec): min=8, max=27459, avg=130.98, stdev=1014.06 00:10:48.370 clat (usec): min=9266, max=53459, avg=18316.89, stdev=6462.67 00:10:48.370 lat (usec): min=9281, max=53485, avg=18447.87, stdev=6529.27 00:10:48.370 clat percentiles (usec): 00:10:48.370 | 1.00th=[10028], 5.00th=[12649], 10.00th=[12911], 20.00th=[13304], 00:10:48.370 | 30.00th=[13566], 40.00th=[14222], 50.00th=[18220], 60.00th=[19268], 00:10:48.370 | 70.00th=[19792], 80.00th=[20579], 90.00th=[27395], 95.00th=[35914], 00:10:48.370 | 99.00th=[37487], 99.50th=[37487], 99.90th=[37487], 99.95th=[37487], 00:10:48.370 | 99.99th=[53216] 00:10:48.370 write: IOPS=4329, BW=16.9MiB/s (17.7MB/s)(17.0MiB/1005msec); 0 zone resets 00:10:48.370 slat (usec): min=11, max=11379, avg=99.30, stdev=608.53 00:10:48.370 clat (usec): min=2491, max=37023, avg=12035.97, stdev=3543.30 00:10:48.370 lat (usec): min=7562, max=37056, avg=12135.27, stdev=3522.70 00:10:48.370 clat percentiles (usec): 00:10:48.370 | 1.00th=[ 7635], 5.00th=[ 8586], 10.00th=[ 8979], 20.00th=[ 9372], 00:10:48.370 | 30.00th=[ 9634], 40.00th=[10028], 50.00th=[10552], 60.00th=[10945], 00:10:48.370 | 70.00th=[13173], 80.00th=[16057], 90.00th=[17433], 95.00th=[18482], 00:10:48.370 | 99.00th=[22676], 99.50th=[22676], 99.90th=[22938], 99.95th=[22938], 00:10:48.370 | 99.99th=[36963] 00:10:48.370 bw ( KiB/s): min=14088, max=19696, per=31.02%, avg=16892.00, stdev=3965.45, samples=2 00:10:48.370 iops : min= 3522, max= 4924, avg=4223.00, stdev=991.36, samples=2 00:10:48.370 lat (msec) : 4=0.01%, 10=21.29%, 20=65.43%, 50=13.25%, 100=0.02% 00:10:48.370 cpu : usr=3.39%, sys=11.85%, ctx=181, majf=0, minf=9 00:10:48.370 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:48.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.370 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:48.370 issued rwts: total=4096,4351,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:48.370 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:48.370 job1: (groupid=0, jobs=1): err= 0: pid=80300: Sat Jul 13 05:57:39 2024 00:10:48.370 read: IOPS=1113, BW=4453KiB/s (4560kB/s)(4480KiB/1006msec) 00:10:48.370 slat (usec): min=6, max=22794, avg=367.82, stdev=1565.58 00:10:48.370 clat (usec): min=3662, max=71129, avg=44587.06, stdev=10619.24 00:10:48.370 lat (usec): min=9654, max=72189, avg=44954.88, stdev=10664.74 00:10:48.370 clat percentiles (usec): 00:10:48.370 | 1.00th=[ 9896], 5.00th=[33424], 10.00th=[35914], 20.00th=[38011], 00:10:48.370 | 30.00th=[39060], 40.00th=[39584], 50.00th=[40633], 60.00th=[43254], 00:10:48.370 | 70.00th=[49021], 80.00th=[54789], 90.00th=[58983], 95.00th=[64226], 00:10:48.370 | 99.00th=[67634], 99.50th=[68682], 99.90th=[70779], 99.95th=[70779], 00:10:48.370 | 99.99th=[70779] 00:10:48.370 write: IOPS=1526, BW=6107KiB/s (6254kB/s)(6144KiB/1006msec); 0 zone resets 00:10:48.370 slat (usec): min=8, max=18771, avg=376.41, stdev=1591.03 00:10:48.370 clat (msec): min=14, max=112, avg=49.97, stdev=28.69 00:10:48.370 lat (msec): min=14, max=112, avg=50.35, stdev=28.91 00:10:48.370 clat 
percentiles (msec): 00:10:48.370 | 1.00th=[ 19], 5.00th=[ 23], 10.00th=[ 24], 20.00th=[ 27], 00:10:48.370 | 30.00th=[ 31], 40.00th=[ 33], 50.00th=[ 35], 60.00th=[ 37], 00:10:48.370 | 70.00th=[ 74], 80.00th=[ 92], 90.00th=[ 95], 95.00th=[ 97], 00:10:48.370 | 99.00th=[ 99], 99.50th=[ 101], 99.90th=[ 109], 99.95th=[ 113], 00:10:48.370 | 99.99th=[ 113] 00:10:48.370 bw ( KiB/s): min= 3840, max= 8208, per=11.06%, avg=6024.00, stdev=3088.64, samples=2 00:10:48.370 iops : min= 960, max= 2052, avg=1506.00, stdev=772.16, samples=2 00:10:48.370 lat (msec) : 4=0.04%, 10=0.64%, 20=1.47%, 50=66.68%, 100=30.87% 00:10:48.370 lat (msec) : 250=0.30% 00:10:48.370 cpu : usr=1.29%, sys=4.18%, ctx=318, majf=0, minf=13 00:10:48.370 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:10:48.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.370 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:48.370 issued rwts: total=1120,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:48.370 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:48.370 job2: (groupid=0, jobs=1): err= 0: pid=80301: Sat Jul 13 05:57:39 2024 00:10:48.370 read: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec) 00:10:48.370 slat (usec): min=7, max=5685, avg=76.64, stdev=468.64 00:10:48.370 clat (usec): min=6469, max=17499, avg=10784.58, stdev=1153.07 00:10:48.370 lat (usec): min=6480, max=20713, avg=10861.22, stdev=1175.88 00:10:48.370 clat percentiles (usec): 00:10:48.370 | 1.00th=[ 6849], 5.00th=[ 9634], 10.00th=[10028], 20.00th=[10290], 00:10:48.370 | 30.00th=[10552], 40.00th=[10683], 50.00th=[10814], 60.00th=[10945], 00:10:48.370 | 70.00th=[11076], 80.00th=[11207], 90.00th=[11600], 95.00th=[11994], 00:10:48.370 | 99.00th=[16188], 99.50th=[16712], 99.90th=[17433], 99.95th=[17433], 00:10:48.370 | 99.99th=[17433] 00:10:48.370 write: IOPS=6252, BW=24.4MiB/s (25.6MB/s)(24.5MiB/1003msec); 0 zone resets 00:10:48.370 slat (usec): min=10, max=6426, avg=77.49, stdev=428.47 00:10:48.370 clat (usec): min=479, max=13349, avg=9708.93, stdev=1032.51 00:10:48.370 lat (usec): min=3116, max=13561, avg=9786.42, stdev=956.50 00:10:48.370 clat percentiles (usec): 00:10:48.370 | 1.00th=[ 5866], 5.00th=[ 8586], 10.00th=[ 8848], 20.00th=[ 9110], 00:10:48.370 | 30.00th=[ 9503], 40.00th=[ 9634], 50.00th=[ 9765], 60.00th=[ 9896], 00:10:48.370 | 70.00th=[10028], 80.00th=[10159], 90.00th=[10683], 95.00th=[10945], 00:10:48.370 | 99.00th=[12780], 99.50th=[13173], 99.90th=[13304], 99.95th=[13304], 00:10:48.370 | 99.99th=[13304] 00:10:48.370 bw ( KiB/s): min=24625, max=24632, per=45.23%, avg=24628.50, stdev= 4.95, samples=2 00:10:48.370 iops : min= 6156, max= 6158, avg=6157.00, stdev= 1.41, samples=2 00:10:48.370 lat (usec) : 500=0.01% 00:10:48.370 lat (msec) : 4=0.32%, 10=38.91%, 20=60.76% 00:10:48.370 cpu : usr=4.79%, sys=16.17%, ctx=256, majf=0, minf=15 00:10:48.370 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:48.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.370 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:48.370 issued rwts: total=6144,6271,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:48.370 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:48.370 job3: (groupid=0, jobs=1): err= 0: pid=80302: Sat Jul 13 05:57:39 2024 00:10:48.370 read: IOPS=1141, BW=4566KiB/s (4676kB/s)(4580KiB/1003msec) 00:10:48.370 slat (usec): min=7, max=15941, avg=348.21, stdev=1400.55 
00:10:48.370 clat (usec): min=1311, max=70898, avg=43210.57, stdev=12832.06 00:10:48.370 lat (usec): min=3232, max=72556, avg=43558.77, stdev=12849.03 00:10:48.370 clat percentiles (usec): 00:10:48.370 | 1.00th=[ 3458], 5.00th=[16057], 10.00th=[33424], 20.00th=[36439], 00:10:48.370 | 30.00th=[38011], 40.00th=[38536], 50.00th=[39584], 60.00th=[42730], 00:10:48.370 | 70.00th=[49546], 80.00th=[53740], 90.00th=[61604], 95.00th=[64226], 00:10:48.370 | 99.00th=[68682], 99.50th=[69731], 99.90th=[70779], 99.95th=[70779], 00:10:48.370 | 99.99th=[70779] 00:10:48.370 write: IOPS=1531, BW=6126KiB/s (6273kB/s)(6144KiB/1003msec); 0 zone resets 00:10:48.370 slat (usec): min=5, max=21010, avg=384.39, stdev=1635.10 00:10:48.370 clat (msec): min=15, max=115, avg=49.01, stdev=29.51 00:10:48.370 lat (msec): min=16, max=116, avg=49.39, stdev=29.72 00:10:48.370 clat percentiles (msec): 00:10:48.370 | 1.00th=[ 19], 5.00th=[ 21], 10.00th=[ 24], 20.00th=[ 27], 00:10:48.370 | 30.00th=[ 27], 40.00th=[ 29], 50.00th=[ 35], 60.00th=[ 36], 00:10:48.370 | 70.00th=[ 70], 80.00th=[ 92], 90.00th=[ 94], 95.00th=[ 97], 00:10:48.370 | 99.00th=[ 100], 99.50th=[ 104], 99.90th=[ 112], 99.95th=[ 115], 00:10:48.370 | 99.99th=[ 115] 00:10:48.370 bw ( KiB/s): min= 4048, max= 8192, per=11.24%, avg=6120.00, stdev=2930.25, samples=2 00:10:48.370 iops : min= 1012, max= 2048, avg=1530.00, stdev=732.56, samples=2 00:10:48.370 lat (msec) : 2=0.04%, 4=1.01%, 20=3.43%, 50=64.94%, 100=30.21% 00:10:48.370 lat (msec) : 250=0.37% 00:10:48.370 cpu : usr=1.80%, sys=3.99%, ctx=302, majf=0, minf=9 00:10:48.370 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:10:48.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.370 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:48.370 issued rwts: total=1145,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:48.370 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:48.370 00:10:48.370 Run status group 0 (all jobs): 00:10:48.370 READ: bw=48.6MiB/s (50.9MB/s), 4453KiB/s-23.9MiB/s (4560kB/s-25.1MB/s), io=48.8MiB (51.2MB), run=1003-1006msec 00:10:48.370 WRITE: bw=53.2MiB/s (55.8MB/s), 6107KiB/s-24.4MiB/s (6254kB/s-25.6MB/s), io=53.5MiB (56.1MB), run=1003-1006msec 00:10:48.370 00:10:48.370 Disk stats (read/write): 00:10:48.370 nvme0n1: ios=3374/3584, merge=0/0, ticks=61367/41524, in_queue=102891, util=89.47% 00:10:48.370 nvme0n2: ios=1067/1377, merge=0/0, ticks=21932/29804, in_queue=51736, util=88.26% 00:10:48.370 nvme0n3: ios=5141/5504, merge=0/0, ticks=51657/48921, in_queue=100578, util=89.48% 00:10:48.370 nvme0n4: ios=1024/1377, merge=0/0, ticks=21942/29380, in_queue=51322, util=87.66% 00:10:48.370 05:57:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:48.370 05:57:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=80315 00:10:48.370 05:57:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:48.370 05:57:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:48.370 [global] 00:10:48.370 thread=1 00:10:48.370 invalidate=1 00:10:48.370 rw=read 00:10:48.370 time_based=1 00:10:48.370 runtime=10 00:10:48.370 ioengine=libaio 00:10:48.370 direct=1 00:10:48.370 bs=4096 00:10:48.370 iodepth=1 00:10:48.370 norandommap=1 00:10:48.370 numjobs=1 00:10:48.370 00:10:48.370 [job0] 00:10:48.370 filename=/dev/nvme0n1 00:10:48.370 [job1] 00:10:48.370 filename=/dev/nvme0n2 00:10:48.370 [job2] 00:10:48.370 
filename=/dev/nvme0n3 00:10:48.370 [job3] 00:10:48.370 filename=/dev/nvme0n4 00:10:48.370 Could not set queue depth (nvme0n1) 00:10:48.370 Could not set queue depth (nvme0n2) 00:10:48.370 Could not set queue depth (nvme0n3) 00:10:48.370 Could not set queue depth (nvme0n4) 00:10:48.370 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:48.370 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:48.370 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:48.370 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:48.370 fio-3.35 00:10:48.370 Starting 4 threads 00:10:51.657 05:57:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:51.657 fio: pid=80358, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:51.657 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=47673344, buflen=4096 00:10:51.657 05:57:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:51.915 fio: pid=80357, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:51.915 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=69844992, buflen=4096 00:10:51.915 05:57:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:51.915 05:57:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:51.915 fio: pid=80355, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:51.915 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=10903552, buflen=4096 00:10:52.173 05:57:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:52.173 05:57:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:52.173 fio: pid=80356, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:52.173 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=65368064, buflen=4096 00:10:52.432 05:57:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:52.432 05:57:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:52.432 00:10:52.432 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=80355: Sat Jul 13 05:57:43 2024 00:10:52.432 read: IOPS=5473, BW=21.4MiB/s (22.4MB/s)(74.4MiB/3480msec) 00:10:52.432 slat (usec): min=10, max=12677, avg=17.24, stdev=147.21 00:10:52.432 clat (usec): min=122, max=3674, avg=163.98, stdev=41.57 00:10:52.432 lat (usec): min=135, max=12845, avg=181.22, stdev=153.30 00:10:52.432 clat percentiles (usec): 00:10:52.432 | 1.00th=[ 135], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 149], 00:10:52.432 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 165], 00:10:52.432 | 70.00th=[ 172], 80.00th=[ 178], 90.00th=[ 188], 95.00th=[ 196], 00:10:52.432 | 99.00th=[ 215], 99.50th=[ 223], 99.90th=[ 322], 99.95th=[ 693], 00:10:52.432 | 99.99th=[ 2573] 
00:10:52.432 bw ( KiB/s): min=21768, max=22880, per=32.45%, avg=22184.00, stdev=377.21, samples=6 00:10:52.432 iops : min= 5442, max= 5720, avg=5546.00, stdev=94.30, samples=6 00:10:52.432 lat (usec) : 250=99.85%, 500=0.08%, 750=0.02%, 1000=0.02% 00:10:52.432 lat (msec) : 2=0.02%, 4=0.01% 00:10:52.432 cpu : usr=2.30%, sys=7.10%, ctx=19055, majf=0, minf=1 00:10:52.432 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:52.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.432 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.432 issued rwts: total=19047,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:52.432 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:52.432 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=80356: Sat Jul 13 05:57:43 2024 00:10:52.432 read: IOPS=4282, BW=16.7MiB/s (17.5MB/s)(62.3MiB/3727msec) 00:10:52.432 slat (usec): min=7, max=11837, avg=15.81, stdev=166.17 00:10:52.432 clat (usec): min=116, max=3902, avg=216.45, stdev=78.87 00:10:52.432 lat (usec): min=130, max=12011, avg=232.26, stdev=183.06 00:10:52.432 clat percentiles (usec): 00:10:52.432 | 1.00th=[ 129], 5.00th=[ 137], 10.00th=[ 145], 20.00th=[ 155], 00:10:52.432 | 30.00th=[ 167], 40.00th=[ 192], 50.00th=[ 231], 60.00th=[ 243], 00:10:52.432 | 70.00th=[ 251], 80.00th=[ 262], 90.00th=[ 277], 95.00th=[ 289], 00:10:52.432 | 99.00th=[ 322], 99.50th=[ 359], 99.90th=[ 1012], 99.95th=[ 1254], 00:10:52.432 | 99.99th=[ 3130] 00:10:52.432 bw ( KiB/s): min=14824, max=21440, per=24.56%, avg=16789.14, stdev=2940.20, samples=7 00:10:52.432 iops : min= 3706, max= 5360, avg=4197.29, stdev=735.05, samples=7 00:10:52.432 lat (usec) : 250=68.42%, 500=31.36%, 750=0.08%, 1000=0.03% 00:10:52.432 lat (msec) : 2=0.08%, 4=0.03% 00:10:52.432 cpu : usr=1.21%, sys=5.02%, ctx=15969, majf=0, minf=1 00:10:52.432 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:52.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.432 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.432 issued rwts: total=15960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:52.432 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:52.432 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=80357: Sat Jul 13 05:57:43 2024 00:10:52.432 read: IOPS=5243, BW=20.5MiB/s (21.5MB/s)(66.6MiB/3252msec) 00:10:52.432 slat (usec): min=10, max=12109, avg=15.20, stdev=106.59 00:10:52.432 clat (usec): min=33, max=2042, avg=174.16, stdev=30.06 00:10:52.432 lat (usec): min=148, max=12299, avg=189.36, stdev=110.97 00:10:52.432 clat percentiles (usec): 00:10:52.432 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 153], 20.00th=[ 159], 00:10:52.432 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 176], 00:10:52.432 | 70.00th=[ 182], 80.00th=[ 188], 90.00th=[ 198], 95.00th=[ 206], 00:10:52.432 | 99.00th=[ 227], 99.50th=[ 237], 99.90th=[ 363], 99.95th=[ 594], 00:10:52.432 | 99.99th=[ 1631] 00:10:52.432 bw ( KiB/s): min=20696, max=21560, per=30.96%, avg=21162.67, stdev=308.46, samples=6 00:10:52.432 iops : min= 5174, max= 5390, avg=5290.67, stdev=77.11, samples=6 00:10:52.432 lat (usec) : 50=0.01%, 250=99.67%, 500=0.26%, 750=0.02%, 1000=0.01% 00:10:52.432 lat (msec) : 2=0.02%, 4=0.01% 00:10:52.432 cpu : usr=1.57%, sys=6.27%, ctx=17059, majf=0, minf=1 00:10:52.432 IO depths : 1=100.0%, 2=0.0%, 
4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:52.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.432 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.432 issued rwts: total=17053,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:52.432 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:52.432 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=80358: Sat Jul 13 05:57:43 2024 00:10:52.432 read: IOPS=3904, BW=15.3MiB/s (16.0MB/s)(45.5MiB/2981msec) 00:10:52.432 slat (usec): min=8, max=191, avg=14.66, stdev= 6.27 00:10:52.432 clat (usec): min=88, max=8008, avg=239.87, stdev=88.92 00:10:52.432 lat (usec): min=162, max=8062, avg=254.53, stdev=88.59 00:10:52.432 clat percentiles (usec): 00:10:52.432 | 1.00th=[ 157], 5.00th=[ 169], 10.00th=[ 178], 20.00th=[ 212], 00:10:52.432 | 30.00th=[ 229], 40.00th=[ 237], 50.00th=[ 243], 60.00th=[ 251], 00:10:52.432 | 70.00th=[ 258], 80.00th=[ 265], 90.00th=[ 277], 95.00th=[ 289], 00:10:52.432 | 99.00th=[ 318], 99.50th=[ 351], 99.90th=[ 570], 99.95th=[ 742], 00:10:52.432 | 99.99th=[ 3294] 00:10:52.432 bw ( KiB/s): min=14816, max=19024, per=23.05%, avg=15760.00, stdev=1833.62, samples=5 00:10:52.432 iops : min= 3704, max= 4756, avg=3940.00, stdev=458.40, samples=5 00:10:52.432 lat (usec) : 100=0.01%, 250=59.92%, 500=39.92%, 750=0.09%, 1000=0.01% 00:10:52.432 lat (msec) : 2=0.02%, 4=0.01%, 10=0.01% 00:10:52.432 cpu : usr=1.24%, sys=5.47%, ctx=11645, majf=0, minf=1 00:10:52.432 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:52.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.432 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.433 issued rwts: total=11640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:52.433 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:52.433 00:10:52.433 Run status group 0 (all jobs): 00:10:52.433 READ: bw=66.8MiB/s (70.0MB/s), 15.3MiB/s-21.4MiB/s (16.0MB/s-22.4MB/s), io=249MiB (261MB), run=2981-3727msec 00:10:52.433 00:10:52.433 Disk stats (read/write): 00:10:52.433 nvme0n1: ios=18402/0, merge=0/0, ticks=3160/0, in_queue=3160, util=95.22% 00:10:52.433 nvme0n2: ios=15280/0, merge=0/0, ticks=3291/0, in_queue=3291, util=95.61% 00:10:52.433 nvme0n3: ios=16350/0, merge=0/0, ticks=2950/0, in_queue=2950, util=96.30% 00:10:52.433 nvme0n4: ios=11220/0, merge=0/0, ticks=2675/0, in_queue=2675, util=96.49% 00:10:52.433 05:57:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:52.433 05:57:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:52.999 05:57:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:52.999 05:57:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:52.999 05:57:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:52.999 05:57:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:53.257 05:57:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:53.257 
05:57:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:53.515 05:57:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:53.515 05:57:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 80315 00:10:53.515 05:57:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:53.515 05:57:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:53.515 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:53.515 05:57:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:53.515 05:57:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:10:53.515 05:57:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:53.515 05:57:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:53.515 05:57:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:53.515 05:57:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:53.515 nvmf hotplug test: fio failed as expected 00:10:53.515 05:57:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:10:53.515 05:57:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:53.515 05:57:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:53.515 05:57:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:54.082 05:57:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:54.082 05:57:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:54.082 05:57:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:54.082 05:57:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:54.082 05:57:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:54.082 05:57:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:54.082 05:57:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:10:54.082 05:57:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:54.082 05:57:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:10:54.082 05:57:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:54.082 05:57:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:54.082 rmmod nvme_tcp 00:10:54.082 rmmod nvme_fabrics 00:10:54.082 rmmod nvme_keyring 00:10:54.082 05:57:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:54.082 05:57:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:10:54.082 05:57:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:10:54.082 05:57:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 79935 ']' 00:10:54.082 05:57:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 79935 00:10:54.082 05:57:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 79935 ']' 00:10:54.082 05:57:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 79935 00:10:54.082 
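Condensed, the hotplug check and teardown recorded above amount to the sequence below; the fio-wrapper arguments, bdev names, and NQN are the ones used in this run, while the backgrounding and error handling are simplified relative to the real fio.sh:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Kick off a 10-second read workload against the four namespaces, then pull their bdevs
/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
fio_pid=$!
sleep 3
$RPC bdev_raid_delete concat0
$RPC bdev_raid_delete raid0
for bdev in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
    $RPC bdev_malloc_delete $bdev
done

# fio is expected to die with io_u "Remote I/O error" once its devices disappear
if wait "$fio_pid"; then
    echo "unexpected: fio survived bdev removal"
else
    echo "nvmf hotplug test: fio failed as expected"
fi

# Tear the fabric down: disconnect the host and drop the subsystem
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1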
05:57:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:10:54.082 05:57:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:54.082 05:57:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79935 00:10:54.082 killing process with pid 79935 00:10:54.082 05:57:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:54.082 05:57:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:54.082 05:57:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79935' 00:10:54.082 05:57:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 79935 00:10:54.082 05:57:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 79935 00:10:54.082 05:57:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:54.082 05:57:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:54.082 05:57:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:54.082 05:57:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:54.082 05:57:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:54.082 05:57:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:54.082 05:57:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:54.082 05:57:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:54.082 05:57:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:54.082 ************************************ 00:10:54.082 END TEST nvmf_fio_target 00:10:54.082 ************************************ 00:10:54.082 00:10:54.082 real 0m18.564s 00:10:54.082 user 1m9.148s 00:10:54.082 sys 0m11.188s 00:10:54.082 05:57:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:54.082 05:57:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.341 05:57:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:54.341 05:57:45 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:54.341 05:57:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:54.341 05:57:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:54.341 05:57:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:54.341 ************************************ 00:10:54.341 START TEST nvmf_bdevio 00:10:54.341 ************************************ 00:10:54.341 05:57:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:54.341 * Looking for test storage... 
00:10:54.341 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:54.341 05:57:45 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:54.341 05:57:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:54.342 05:57:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:54.342 05:57:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:54.342 05:57:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:54.342 05:57:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:54.342 05:57:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:54.342 05:57:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:54.342 05:57:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:54.342 05:57:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:54.342 05:57:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:54.342 05:57:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:54.342 05:57:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:10:54.342 05:57:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=d95af516-4532-4483-a837-b3cd72acabce 00:10:54.342 05:57:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:54.342 05:57:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:54.342 05:57:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:54.342 05:57:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:54.342 05:57:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:54.342 05:57:45 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:54.342 05:57:45 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:54.342 05:57:45 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:54.342 05:57:45 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.342 05:57:45 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.342 05:57:45 
nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.342 05:57:45 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:54.342 05:57:45 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.342 05:57:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:10:54.342 05:57:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:54.342 05:57:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:54.342 05:57:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:54.342 05:57:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:54.342 05:57:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:54.342 05:57:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:54.342 05:57:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:54.342 05:57:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:54.342 05:57:45 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:54.342 05:57:45 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:54.342 05:57:45 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:10:54.342 05:57:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:54.342 05:57:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:54.342 05:57:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:54.342 05:57:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:54.342 05:57:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:54.342 05:57:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:54.342 05:57:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:54.342 05:57:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:54.342 05:57:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:54.342 05:57:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:54.342 05:57:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:54.342 05:57:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:54.342 05:57:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@431 -- 
# [[ tcp == tcp ]] 00:10:54.342 05:57:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:54.342 05:57:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:54.342 05:57:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:54.342 05:57:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:54.342 05:57:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:54.342 05:57:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:54.342 05:57:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:54.342 05:57:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:54.342 05:57:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:54.342 05:57:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:54.342 05:57:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:54.342 05:57:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:54.342 05:57:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:54.342 05:57:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:54.342 05:57:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:54.342 Cannot find device "nvmf_tgt_br" 00:10:54.342 05:57:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:10:54.342 05:57:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:54.342 Cannot find device "nvmf_tgt_br2" 00:10:54.342 05:57:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:10:54.342 05:57:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:54.342 05:57:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:54.342 Cannot find device "nvmf_tgt_br" 00:10:54.342 05:57:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:10:54.342 05:57:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:54.342 Cannot find device "nvmf_tgt_br2" 00:10:54.342 05:57:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:10:54.342 05:57:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:54.601 05:57:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:54.601 05:57:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:54.601 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:54.601 05:57:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:10:54.601 05:57:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:54.601 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:54.601 05:57:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:10:54.601 05:57:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:54.601 05:57:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:54.601 05:57:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add 
nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:54.601 05:57:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:54.601 05:57:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:54.601 05:57:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:54.601 05:57:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:54.601 05:57:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:54.601 05:57:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:54.601 05:57:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:54.601 05:57:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:54.601 05:57:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:54.601 05:57:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:54.601 05:57:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:54.601 05:57:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:54.601 05:57:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:54.601 05:57:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:54.601 05:57:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:54.601 05:57:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:54.601 05:57:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:54.601 05:57:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:54.601 05:57:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:54.601 05:57:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:54.601 05:57:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:54.601 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:54.601 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:10:54.601 00:10:54.601 --- 10.0.0.2 ping statistics --- 00:10:54.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:54.601 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:10:54.601 05:57:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:54.601 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:54.601 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:10:54.601 00:10:54.601 --- 10.0.0.3 ping statistics --- 00:10:54.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:54.601 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:10:54.601 05:57:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:54.601 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:54.601 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:10:54.601 00:10:54.601 --- 10.0.0.1 ping statistics --- 00:10:54.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:54.601 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:10:54.601 05:57:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:54.601 05:57:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:10:54.601 05:57:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:54.601 05:57:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:54.601 05:57:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:54.601 05:57:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:54.601 05:57:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:54.601 05:57:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:54.601 05:57:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:54.860 05:57:46 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:54.860 05:57:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:54.860 05:57:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:54.860 05:57:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:54.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:54.860 05:57:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=80626 00:10:54.860 05:57:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 80626 00:10:54.860 05:57:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 80626 ']' 00:10:54.860 05:57:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:54.860 05:57:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:54.860 05:57:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:54.860 05:57:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:54.860 05:57:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:54.860 05:57:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:54.860 [2024-07-13 05:57:46.391716] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:10:54.860 [2024-07-13 05:57:46.391802] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:54.860 [2024-07-13 05:57:46.533416] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:54.860 [2024-07-13 05:57:46.570608] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:54.860 [2024-07-13 05:57:46.570658] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
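For reference, the nvmf_veth_init sequence traced above boils down to a small bring-up: one network namespace for the target, three veth pairs whose host-side ends are enslaved to a bridge, static 10.0.0.x/24 addresses, and iptables rules that admit NVMe/TCP on port 4420. The following is a condensed sketch assembled only from the commands visible in the trace (not a verbatim copy of nvmf/common.sh), and it assumes it is run as root with iproute2 and iptables available:

  # create the target namespace and three veth pairs (host side = *_br, target side = *_if)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  # move the target-side interfaces into the namespace and assign addresses
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # bring everything up, then bridge the host-side ends together
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up  && ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # admit NVMe/TCP (port 4420) on the initiator interface and allow forwarding across the bridge
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # connectivity checks, as in the log: target addresses from the host, initiator address from the namespace
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1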
00:10:54.860 [2024-07-13 05:57:46.570684] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:54.860 [2024-07-13 05:57:46.570691] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:54.860 [2024-07-13 05:57:46.570697] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:54.861 [2024-07-13 05:57:46.570851] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:10:54.861 [2024-07-13 05:57:46.571597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:10:54.861 [2024-07-13 05:57:46.571694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:10:54.861 [2024-07-13 05:57:46.571716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:55.120 [2024-07-13 05:57:46.602491] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:55.120 05:57:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:55.120 05:57:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:10:55.120 05:57:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:55.120 05:57:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:55.120 05:57:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:55.120 05:57:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:55.120 05:57:46 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:55.120 05:57:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.120 05:57:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:55.120 [2024-07-13 05:57:46.699351] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:55.120 05:57:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.120 05:57:46 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:55.120 05:57:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.120 05:57:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:55.120 Malloc0 00:10:55.120 05:57:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.120 05:57:46 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:55.120 05:57:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.120 05:57:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:55.120 05:57:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.120 05:57:46 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:55.120 05:57:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.120 05:57:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:55.120 05:57:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.120 05:57:46 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:55.120 05:57:46 nvmf_tcp.nvmf_bdevio -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.120 05:57:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:55.120 [2024-07-13 05:57:46.764518] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:55.120 05:57:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.120 05:57:46 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:55.120 05:57:46 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:55.120 05:57:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:10:55.120 05:57:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:10:55.120 05:57:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:55.120 05:57:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:55.120 { 00:10:55.120 "params": { 00:10:55.120 "name": "Nvme$subsystem", 00:10:55.120 "trtype": "$TEST_TRANSPORT", 00:10:55.120 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:55.120 "adrfam": "ipv4", 00:10:55.120 "trsvcid": "$NVMF_PORT", 00:10:55.120 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:55.120 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:55.120 "hdgst": ${hdgst:-false}, 00:10:55.120 "ddgst": ${ddgst:-false} 00:10:55.120 }, 00:10:55.120 "method": "bdev_nvme_attach_controller" 00:10:55.120 } 00:10:55.120 EOF 00:10:55.120 )") 00:10:55.120 05:57:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:10:55.120 05:57:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:10:55.120 05:57:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:10:55.120 05:57:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:55.120 "params": { 00:10:55.120 "name": "Nvme1", 00:10:55.120 "trtype": "tcp", 00:10:55.120 "traddr": "10.0.0.2", 00:10:55.120 "adrfam": "ipv4", 00:10:55.120 "trsvcid": "4420", 00:10:55.120 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:55.120 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:55.120 "hdgst": false, 00:10:55.120 "ddgst": false 00:10:55.120 }, 00:10:55.120 "method": "bdev_nvme_attach_controller" 00:10:55.120 }' 00:10:55.120 [2024-07-13 05:57:46.819034] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
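The rpc_cmd calls traced above configure the target over its JSON-RPC socket; the same setup can be reproduced by hand with scripts/rpc.py (the path used elsewhere in this log), passing the arguments exactly as they appear in the trace. A minimal sketch, assuming the target answers on the default /var/tmp/spdk.sock:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py    # default RPC socket assumed
  # TCP transport plus one 64 MiB, 512-byte-block malloc namespace on cnode1
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # ... bdevio then runs against this listener, driven by the JSON shown above on fd 62 ...
  # teardown, mirroring the end of the test:
  $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1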
00:10:55.120 [2024-07-13 05:57:46.819119] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80650 ] 00:10:55.379 [2024-07-13 05:57:46.961704] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:55.379 [2024-07-13 05:57:47.006653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:55.379 [2024-07-13 05:57:47.006808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:55.379 [2024-07-13 05:57:47.006815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.379 [2024-07-13 05:57:47.050609] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:55.638 I/O targets: 00:10:55.638 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:55.638 00:10:55.638 00:10:55.638 CUnit - A unit testing framework for C - Version 2.1-3 00:10:55.638 http://cunit.sourceforge.net/ 00:10:55.638 00:10:55.638 00:10:55.638 Suite: bdevio tests on: Nvme1n1 00:10:55.638 Test: blockdev write read block ...passed 00:10:55.638 Test: blockdev write zeroes read block ...passed 00:10:55.638 Test: blockdev write zeroes read no split ...passed 00:10:55.638 Test: blockdev write zeroes read split ...passed 00:10:55.638 Test: blockdev write zeroes read split partial ...passed 00:10:55.638 Test: blockdev reset ...[2024-07-13 05:57:47.184281] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:55.638 [2024-07-13 05:57:47.184678] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1656cd0 (9): Bad file descriptor 00:10:55.638 [2024-07-13 05:57:47.200878] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:55.638 passed 00:10:55.638 Test: blockdev write read 8 blocks ...passed 00:10:55.638 Test: blockdev write read size > 128k ...passed 00:10:55.638 Test: blockdev write read invalid size ...passed 00:10:55.638 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:55.638 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:55.638 Test: blockdev write read max offset ...passed 00:10:55.638 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:55.638 Test: blockdev writev readv 8 blocks ...passed 00:10:55.638 Test: blockdev writev readv 30 x 1block ...passed 00:10:55.638 Test: blockdev writev readv block ...passed 00:10:55.638 Test: blockdev writev readv size > 128k ...passed 00:10:55.638 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:55.638 Test: blockdev comparev and writev ...[2024-07-13 05:57:47.209312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:55.638 [2024-07-13 05:57:47.209549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:55.638 [2024-07-13 05:57:47.209584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:55.638 [2024-07-13 05:57:47.209599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:55.638 [2024-07-13 05:57:47.209946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:55.638 [2024-07-13 05:57:47.209978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:55.638 [2024-07-13 05:57:47.210011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:55.638 [2024-07-13 05:57:47.210026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:55.638 [2024-07-13 05:57:47.210329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:55.638 [2024-07-13 05:57:47.210356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:55.638 [2024-07-13 05:57:47.210394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:55.638 [2024-07-13 05:57:47.210408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:55.638 [2024-07-13 05:57:47.210698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:55.638 [2024-07-13 05:57:47.210724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:55.638 [2024-07-13 05:57:47.210745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:55.638 [2024-07-13 05:57:47.210758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:55.638 passed 00:10:55.638 Test: blockdev nvme passthru rw ...passed 00:10:55.638 Test: blockdev nvme passthru vendor specific ...[2024-07-13 05:57:47.212021] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:55.638 [2024-07-13 05:57:47.212063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:55.638 [2024-07-13 05:57:47.212188] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:55.638 [2024-07-13 05:57:47.212209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:55.638 [2024-07-13 05:57:47.212324] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:55.638 [2024-07-13 05:57:47.212357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:55.638 [2024-07-13 05:57:47.212490] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:55.638 [2024-07-13 05:57:47.212517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:55.638 passed 00:10:55.638 Test: blockdev nvme admin passthru ...passed 00:10:55.638 Test: blockdev copy ...passed 00:10:55.638 00:10:55.638 Run Summary: Type Total Ran Passed Failed Inactive 00:10:55.638 suites 1 1 n/a 0 0 00:10:55.638 tests 23 23 23 0 0 00:10:55.638 asserts 152 152 152 0 n/a 00:10:55.638 00:10:55.638 Elapsed time = 0.146 seconds 00:10:55.638 05:57:47 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:55.638 05:57:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.638 05:57:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:55.897 05:57:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.897 05:57:47 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:55.897 05:57:47 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:55.897 05:57:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:55.897 05:57:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:10:55.897 05:57:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:55.897 05:57:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:10:55.897 05:57:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:55.897 05:57:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:55.897 rmmod nvme_tcp 00:10:55.897 rmmod nvme_fabrics 00:10:55.897 rmmod nvme_keyring 00:10:55.897 05:57:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:55.897 05:57:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:10:55.897 05:57:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:10:55.897 05:57:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 80626 ']' 00:10:55.897 05:57:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 80626 00:10:55.897 05:57:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
80626 ']' 00:10:55.897 05:57:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 80626 00:10:55.897 05:57:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:10:55.897 05:57:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:55.897 05:57:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80626 00:10:55.897 killing process with pid 80626 00:10:55.897 05:57:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:10:55.897 05:57:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:10:55.897 05:57:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80626' 00:10:55.897 05:57:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 80626 00:10:55.897 05:57:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 80626 00:10:56.157 05:57:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:56.157 05:57:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:56.157 05:57:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:56.157 05:57:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:56.157 05:57:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:56.157 05:57:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:56.157 05:57:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:56.157 05:57:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:56.157 05:57:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:56.157 00:10:56.157 real 0m1.876s 00:10:56.157 user 0m5.303s 00:10:56.157 sys 0m0.654s 00:10:56.157 05:57:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:56.157 05:57:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:56.157 ************************************ 00:10:56.157 END TEST nvmf_bdevio 00:10:56.157 ************************************ 00:10:56.157 05:57:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:56.157 05:57:47 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:10:56.157 05:57:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:56.157 05:57:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:56.157 05:57:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:56.157 ************************************ 00:10:56.157 START TEST nvmf_auth_target 00:10:56.157 ************************************ 00:10:56.157 05:57:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:10:56.157 * Looking for test storage... 
00:10:56.157 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:56.157 05:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:56.157 05:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:10:56.157 05:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:56.157 05:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:56.157 05:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:56.157 05:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:56.157 05:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:56.157 05:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:56.157 05:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:56.157 05:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:56.157 05:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:56.157 05:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:56.157 05:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:10:56.157 05:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=d95af516-4532-4483-a837-b3cd72acabce 00:10:56.157 05:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:56.157 05:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:56.157 05:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:56.157 05:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:56.157 05:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:56.157 05:57:47 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:56.157 05:57:47 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:56.157 05:57:47 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:56.157 05:57:47 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.157 05:57:47 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.157 05:57:47 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.157 05:57:47 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:10:56.157 05:57:47 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.157 05:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:10:56.157 05:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:56.157 05:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:56.157 05:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:56.157 05:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:56.157 05:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:56.157 05:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:56.157 05:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:56.157 05:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:56.157 05:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:10:56.157 05:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:10:56.157 05:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:10:56.157 05:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:10:56.157 05:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:10:56.157 05:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:10:56.157 05:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:10:56.157 05:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:10:56.157 05:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:56.157 05:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:56.157 05:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:56.157 05:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:56.157 05:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:56.158 05:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:56.158 05:57:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:56.158 05:57:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:56.158 05:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:56.158 05:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:56.158 05:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:56.158 05:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:56.158 05:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:56.158 05:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:56.158 05:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:56.158 05:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:56.158 05:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:56.158 05:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:56.158 05:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:56.158 05:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:56.158 05:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:56.158 05:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:56.158 05:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:56.158 05:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:56.158 05:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:56.158 05:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:56.158 05:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:56.417 05:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:56.417 Cannot find device "nvmf_tgt_br" 00:10:56.417 05:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:10:56.417 05:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:56.417 Cannot find device "nvmf_tgt_br2" 00:10:56.417 05:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:10:56.417 05:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:56.417 05:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:56.417 Cannot find device "nvmf_tgt_br" 00:10:56.417 
05:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:10:56.417 05:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:56.417 Cannot find device "nvmf_tgt_br2" 00:10:56.417 05:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:10:56.417 05:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:56.417 05:57:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:56.417 05:57:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:56.417 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:56.417 05:57:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:10:56.417 05:57:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:56.417 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:56.417 05:57:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:10:56.417 05:57:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:56.417 05:57:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:56.417 05:57:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:56.417 05:57:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:56.417 05:57:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:56.417 05:57:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:56.417 05:57:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:56.417 05:57:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:56.417 05:57:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:56.417 05:57:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:56.417 05:57:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:56.417 05:57:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:56.417 05:57:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:56.417 05:57:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:56.417 05:57:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:56.417 05:57:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:56.417 05:57:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:56.417 05:57:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:56.417 05:57:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:56.676 05:57:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:56.676 05:57:48 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:56.676 05:57:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:56.676 05:57:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:56.676 05:57:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:56.676 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:56.676 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:10:56.676 00:10:56.676 --- 10.0.0.2 ping statistics --- 00:10:56.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:56.676 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:10:56.676 05:57:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:56.676 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:56.676 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:10:56.676 00:10:56.676 --- 10.0.0.3 ping statistics --- 00:10:56.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:56.676 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:10:56.676 05:57:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:56.676 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:56.676 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:10:56.676 00:10:56.676 --- 10.0.0.1 ping statistics --- 00:10:56.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:56.676 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:10:56.676 05:57:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:56.676 05:57:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:10:56.676 05:57:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:56.676 05:57:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:56.676 05:57:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:56.676 05:57:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:56.676 05:57:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:56.676 05:57:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:56.676 05:57:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:56.676 05:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:10:56.676 05:57:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:56.676 05:57:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:56.677 05:57:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.677 05:57:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=80820 00:10:56.677 05:57:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 80820 00:10:56.677 05:57:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:10:56.677 05:57:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 80820 ']' 00:10:56.677 05:57:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:56.677 05:57:48 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:56.677 05:57:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:56.677 05:57:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:56.677 05:57:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.614 05:57:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:57.614 05:57:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:10:57.614 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:57.614 05:57:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:57.614 05:57:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.614 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:57.614 05:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=80852 00:10:57.614 05:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:10:57.614 05:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:10:57.614 05:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:10:57.614 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:57.614 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:57.615 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:57.615 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:10:57.615 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:10:57.615 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:57.615 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=24842fc294e65c3355dd0860138fa48ae099aebab165c449 00:10:57.615 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:10:57.615 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.KFI 00:10:57.615 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 24842fc294e65c3355dd0860138fa48ae099aebab165c449 0 00:10:57.615 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 24842fc294e65c3355dd0860138fa48ae099aebab165c449 0 00:10:57.615 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:57.615 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:57.615 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=24842fc294e65c3355dd0860138fa48ae099aebab165c449 00:10:57.615 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:10:57.615 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:57.874 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.KFI 00:10:57.874 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.KFI 00:10:57.874 05:57:49 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.KFI 00:10:57.874 05:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:10:57.874 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:57.874 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:57.874 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:57.874 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:10:57.874 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:10:57.874 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:57.874 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=8fcf1ad20fe4f9533c658ab8496df7e4b4d315a536af49ba48a298f553342093 00:10:57.874 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:10:57.874 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.SY8 00:10:57.874 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 8fcf1ad20fe4f9533c658ab8496df7e4b4d315a536af49ba48a298f553342093 3 00:10:57.874 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 8fcf1ad20fe4f9533c658ab8496df7e4b4d315a536af49ba48a298f553342093 3 00:10:57.874 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:57.874 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:57.874 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=8fcf1ad20fe4f9533c658ab8496df7e4b4d315a536af49ba48a298f553342093 00:10:57.874 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:10:57.874 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:57.874 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.SY8 00:10:57.874 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.SY8 00:10:57.874 05:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.SY8 00:10:57.874 05:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:10:57.874 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:57.874 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:57.874 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:57.874 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:10:57.874 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:10:57.874 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:57.874 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=190801cb7638e0b633f224fa93fb48dd 00:10:57.874 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:10:57.874 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.y2Q 00:10:57.874 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 190801cb7638e0b633f224fa93fb48dd 1 00:10:57.874 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 190801cb7638e0b633f224fa93fb48dd 1 
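Each gen_dhchap_key invocation in this stretch builds one DH-HMAC-CHAP secret: it reads len/2 random bytes as a lowercase hex string, wraps that string as DHHC-1:<digest id>:<base64 blob>:, writes it to a mode-0600 temp file, and echoes the file name so the caller can stash it in keys[]/ckeys[]. The digest id is 00 for a plain (null-transformed) secret and 01/02/03 for sha256/sha384/sha512, which matches the DHHC-1:00:/DHHC-1:03: strings that show up later in the nvme connect commands. A condensed sketch of the helper as reconstructed from the traced commands; treating the trailing four bytes of the base64 blob as a little-endian CRC32 of the key string is an assumption based on the blob length, not something the trace states:

  gen_dhchap_key() {                     # sketch, not the verbatim nvmf/common.sh helper
      local digest=$1 len=$2 key file idx
      local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
      idx=${digests[$digest]}
      key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)          # len hex characters of randomness
      file=$(mktemp -t "spdk.key-$digest.XXX")
      python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); b=k+zlib.crc32(k).to_bytes(4,"little"); print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(b).decode()))' "$key" "$idx" > "$file"
      chmod 0600 "$file"
      echo "$file"                                            # caller stores this path in keys[]/ckeys[]
  }

For example, gen_dhchap_key null 48 above produced /tmp/spdk.key-null.KFI holding a DHHC-1:00:... string for the 48-character key 24842fc2...65c449.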
00:10:57.874 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:57.875 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:57.875 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=190801cb7638e0b633f224fa93fb48dd 00:10:57.875 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:10:57.875 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:57.875 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.y2Q 00:10:57.875 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.y2Q 00:10:57.875 05:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.y2Q 00:10:57.875 05:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:10:57.875 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:57.875 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:57.875 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:57.875 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:10:57.875 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:10:57.875 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:57.875 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=8cb01811fed051b4eb961b3ed6d12c9ecbcb5cede6a4e8ae 00:10:57.875 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:10:57.875 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.ww2 00:10:57.875 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 8cb01811fed051b4eb961b3ed6d12c9ecbcb5cede6a4e8ae 2 00:10:57.875 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 8cb01811fed051b4eb961b3ed6d12c9ecbcb5cede6a4e8ae 2 00:10:57.875 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:57.875 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:57.875 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=8cb01811fed051b4eb961b3ed6d12c9ecbcb5cede6a4e8ae 00:10:57.875 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:10:57.875 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:57.875 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.ww2 00:10:57.875 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.ww2 00:10:57.875 05:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.ww2 00:10:57.875 05:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:10:57.875 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:57.875 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:57.875 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:57.875 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:10:57.875 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:10:57.875 
05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:57.875 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=f2cf32f4d48249887471864949cd99eacca3336157894011 00:10:57.875 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:10:57.875 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.fL1 00:10:57.875 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key f2cf32f4d48249887471864949cd99eacca3336157894011 2 00:10:57.875 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 f2cf32f4d48249887471864949cd99eacca3336157894011 2 00:10:57.875 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:57.875 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:57.875 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=f2cf32f4d48249887471864949cd99eacca3336157894011 00:10:57.875 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:10:57.875 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:58.134 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.fL1 00:10:58.134 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.fL1 00:10:58.134 05:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.fL1 00:10:58.134 05:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:10:58.134 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:58.134 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:58.134 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:58.134 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:10:58.134 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:10:58.134 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:58.134 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=f59e9639cf72d2af3ab60d4773560f5c 00:10:58.134 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:10:58.134 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.KT8 00:10:58.134 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key f59e9639cf72d2af3ab60d4773560f5c 1 00:10:58.134 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 f59e9639cf72d2af3ab60d4773560f5c 1 00:10:58.134 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:58.134 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:58.134 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=f59e9639cf72d2af3ab60d4773560f5c 00:10:58.134 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:10:58.135 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:58.135 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.KT8 00:10:58.135 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.KT8 00:10:58.135 05:57:49 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.KT8 00:10:58.135 05:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:10:58.135 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:58.135 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:58.135 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:58.135 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:10:58.135 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:10:58.135 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:58.135 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=60b13eb0105f73aaea85731cbb842970767792c23b1464b7301d00213debe9eb 00:10:58.135 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:10:58.135 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.qW6 00:10:58.135 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 60b13eb0105f73aaea85731cbb842970767792c23b1464b7301d00213debe9eb 3 00:10:58.135 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 60b13eb0105f73aaea85731cbb842970767792c23b1464b7301d00213debe9eb 3 00:10:58.135 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:58.135 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:58.135 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=60b13eb0105f73aaea85731cbb842970767792c23b1464b7301d00213debe9eb 00:10:58.135 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:10:58.135 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:58.135 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.qW6 00:10:58.135 05:57:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.qW6 00:10:58.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:58.135 05:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.qW6 00:10:58.135 05:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:10:58.135 05:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 80820 00:10:58.135 05:57:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 80820 ']' 00:10:58.135 05:57:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:58.135 05:57:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:58.135 05:57:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:58.135 05:57:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:58.135 05:57:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
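At this point the test holds four key files (keys[0..3]) and three controller-key files (ckeys[0..2]; ckeys[3] is deliberately left empty so that one combination runs without bidirectional authentication). The block that follows registers every file twice, once with the target over /var/tmp/spdk.sock (rpc_cmd) and once with the host-side stack over /var/tmp/host.sock (the hostrpc wrapper), so both ends can refer to the same named keyring entries. A simplified sketch of that registration loop; the rpc.py path is shortened relative to the absolute path in the trace:

  RPC=scripts/rpc.py
  for i in "${!keys[@]}"; do
      $RPC -s /var/tmp/spdk.sock keyring_file_add_key "key$i" "${keys[i]}"       # target side
      $RPC -s /var/tmp/host.sock keyring_file_add_key "key$i" "${keys[i]}"       # host side
      if [[ -n ${ckeys[i]} ]]; then                                              # skipped for ckey3
          $RPC -s /var/tmp/spdk.sock keyring_file_add_key "ckey$i" "${ckeys[i]}"
          $RPC -s /var/tmp/host.sock keyring_file_add_key "ckey$i" "${ckeys[i]}"
      fi
  done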
00:10:58.394 05:57:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:58.394 05:57:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:10:58.394 05:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 80852 /var/tmp/host.sock 00:10:58.394 05:57:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 80852 ']' 00:10:58.394 05:57:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:10:58.394 05:57:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:58.394 05:57:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:10:58.394 05:57:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:58.394 05:57:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.654 05:57:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:58.654 05:57:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:10:58.654 05:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:10:58.654 05:57:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.654 05:57:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.654 05:57:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.654 05:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:58.654 05:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.KFI 00:10:58.654 05:57:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.654 05:57:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.654 05:57:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.654 05:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.KFI 00:10:58.654 05:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.KFI 00:10:58.912 05:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.SY8 ]] 00:10:58.912 05:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.SY8 00:10:58.912 05:57:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.912 05:57:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.912 05:57:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.912 05:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.SY8 00:10:58.912 05:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.SY8 00:10:59.171 05:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:59.171 05:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.y2Q 00:10:59.171 05:57:50 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.171 05:57:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.171 05:57:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.171 05:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.y2Q 00:10:59.171 05:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.y2Q 00:10:59.429 05:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.ww2 ]] 00:10:59.429 05:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ww2 00:10:59.429 05:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.429 05:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.429 05:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.429 05:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ww2 00:10:59.429 05:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ww2 00:10:59.687 05:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:59.687 05:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.fL1 00:10:59.687 05:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.687 05:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.687 05:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.687 05:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.fL1 00:10:59.687 05:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.fL1 00:10:59.944 05:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.KT8 ]] 00:10:59.944 05:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.KT8 00:10:59.944 05:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.944 05:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.944 05:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.944 05:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.KT8 00:10:59.944 05:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.KT8 00:11:00.202 05:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:11:00.202 05:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.qW6 00:11:00.202 05:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.202 05:57:51 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:11:00.202 05:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.202 05:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.qW6 00:11:00.202 05:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.qW6 00:11:00.461 05:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:11:00.461 05:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:11:00.461 05:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:00.461 05:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:00.461 05:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:00.461 05:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:00.719 05:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:11:00.719 05:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:00.719 05:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:00.719 05:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:00.719 05:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:00.719 05:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:00.719 05:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:00.719 05:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.719 05:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.719 05:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.719 05:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:00.719 05:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:00.977 00:11:00.977 05:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:00.977 05:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:00.977 05:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:01.238 05:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:01.238 05:57:52 nvmf_tcp.nvmf_auth_target -- 
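Every connect_authenticate round from here on follows the same four-step pattern: pin the host NVMe driver to a single digest/DH-group combination, grant the host NQN access to the subsystem with a particular key pair, attach a controller through the host-side RPC server (which is where the DH-HMAC-CHAP handshake actually runs), and then inspect the resulting queue pair. A sketch of the sha256/null/key0 case shown above, with the NQNs and address copied from the trace and the rpc.py path shortened:

  HOSTRPC="scripts/rpc.py -s /var/tmp/host.sock"
  TGTRPC="scripts/rpc.py -s /var/tmp/spdk.sock"
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce
  SUBNQN=nqn.2024-03.io.spdk:cnode0

  $HOSTRPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
  $TGTRPC  nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0
  $HOSTRPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
           -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0
  $HOSTRPC bdev_nvme_get_controllers | jq -r '.[].name'      # expected: nvme0

If the handshake were rejected, bdev_nvme_attach_controller is the call that would fail, so a successful attach already implies the negotiated digest/dhgroup/key combination was accepted by both sides.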
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:01.238 05:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.238 05:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.238 05:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.238 05:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:01.238 { 00:11:01.238 "cntlid": 1, 00:11:01.238 "qid": 0, 00:11:01.238 "state": "enabled", 00:11:01.238 "thread": "nvmf_tgt_poll_group_000", 00:11:01.238 "listen_address": { 00:11:01.238 "trtype": "TCP", 00:11:01.238 "adrfam": "IPv4", 00:11:01.238 "traddr": "10.0.0.2", 00:11:01.238 "trsvcid": "4420" 00:11:01.238 }, 00:11:01.238 "peer_address": { 00:11:01.238 "trtype": "TCP", 00:11:01.238 "adrfam": "IPv4", 00:11:01.238 "traddr": "10.0.0.1", 00:11:01.238 "trsvcid": "45304" 00:11:01.238 }, 00:11:01.238 "auth": { 00:11:01.238 "state": "completed", 00:11:01.238 "digest": "sha256", 00:11:01.238 "dhgroup": "null" 00:11:01.238 } 00:11:01.238 } 00:11:01.238 ]' 00:11:01.238 05:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:01.238 05:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:01.238 05:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:01.498 05:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:01.498 05:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:01.498 05:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:01.498 05:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:01.498 05:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:01.757 05:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:00:MjQ4NDJmYzI5NGU2NWMzMzU1ZGQwODYwMTM4ZmE0OGFlMDk5YWViYWIxNjVjNDQ5rqs7Lw==: --dhchap-ctrl-secret DHHC-1:03:OGZjZjFhZDIwZmU0Zjk1MzNjNjU4YWI4NDk2ZGY3ZTRiNGQzMTVhNTM2YWY0OWJhNDhhMjk4ZjU1MzM0MjA5M0RZaH8=: 00:11:07.054 05:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:07.054 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:07.054 05:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:11:07.054 05:57:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.054 05:57:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.054 05:57:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.054 05:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:07.054 05:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:07.054 05:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:07.054 05:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:11:07.054 05:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:07.054 05:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:07.054 05:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:07.054 05:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:07.054 05:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:07.054 05:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:07.054 05:57:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.054 05:57:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.054 05:57:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.054 05:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:07.054 05:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:07.054 00:11:07.054 05:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:07.054 05:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:07.054 05:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:07.054 05:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:07.054 05:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:07.054 05:57:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.054 05:57:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.054 05:57:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.054 05:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:07.054 { 00:11:07.054 "cntlid": 3, 00:11:07.054 "qid": 0, 00:11:07.054 "state": "enabled", 00:11:07.054 "thread": "nvmf_tgt_poll_group_000", 00:11:07.054 "listen_address": { 00:11:07.054 "trtype": "TCP", 00:11:07.054 "adrfam": "IPv4", 00:11:07.054 "traddr": "10.0.0.2", 00:11:07.054 "trsvcid": "4420" 00:11:07.054 }, 00:11:07.054 "peer_address": { 00:11:07.054 "trtype": "TCP", 00:11:07.054 "adrfam": "IPv4", 00:11:07.054 "traddr": "10.0.0.1", 00:11:07.054 "trsvcid": "51672" 00:11:07.054 }, 00:11:07.054 "auth": { 00:11:07.054 "state": "completed", 00:11:07.054 "digest": "sha256", 00:11:07.054 "dhgroup": "null" 00:11:07.054 } 
00:11:07.054 } 00:11:07.054 ]' 00:11:07.054 05:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:07.054 05:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:07.054 05:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:07.331 05:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:07.331 05:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:07.331 05:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:07.331 05:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:07.331 05:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:07.604 05:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:01:MTkwODAxY2I3NjM4ZTBiNjMzZjIyNGZhOTNmYjQ4ZGQkfuOA: --dhchap-ctrl-secret DHHC-1:02:OGNiMDE4MTFmZWQwNTFiNGViOTYxYjNlZDZkMTJjOWVjYmNiNWNlZGU2YTRlOGFl9/17zw==: 00:11:08.181 05:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:08.181 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:08.181 05:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:11:08.181 05:57:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.181 05:57:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.181 05:57:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.181 05:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:08.181 05:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:08.181 05:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:08.440 05:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:11:08.440 05:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:08.440 05:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:08.440 05:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:08.440 05:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:08.440 05:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:08.440 05:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:08.440 05:58:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.440 05:58:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
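After each successful attach, the target is asked for the subsystem's queue pairs and the auth block of the first entry is checked field by field, exactly as in the JSON dumps above: digest and dhgroup must match what was configured on the host, and auth.state must read completed. The three jq probes and the subsequent teardown reduce to roughly the following sketch (socket paths and field names taken from the trace):

  RPC=scripts/rpc.py
  qpairs=$($RPC -s /var/tmp/spdk.sock nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  jq -r '.[0].auth.digest'  <<< "$qpairs"     # expected: sha256
  jq -r '.[0].auth.dhgroup' <<< "$qpairs"     # expected: null here, ffdhe2048 in the later rounds
  jq -r '.[0].auth.state'   <<< "$qpairs"     # expected: completed
  $RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0    # tear down before the next combination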
set +x 00:11:08.440 05:58:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.440 05:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:08.440 05:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:08.698 00:11:08.698 05:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:08.698 05:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:08.699 05:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:08.957 05:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:08.957 05:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:08.957 05:58:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.957 05:58:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.957 05:58:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.957 05:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:08.957 { 00:11:08.957 "cntlid": 5, 00:11:08.957 "qid": 0, 00:11:08.957 "state": "enabled", 00:11:08.957 "thread": "nvmf_tgt_poll_group_000", 00:11:08.957 "listen_address": { 00:11:08.957 "trtype": "TCP", 00:11:08.957 "adrfam": "IPv4", 00:11:08.957 "traddr": "10.0.0.2", 00:11:08.957 "trsvcid": "4420" 00:11:08.957 }, 00:11:08.957 "peer_address": { 00:11:08.957 "trtype": "TCP", 00:11:08.957 "adrfam": "IPv4", 00:11:08.957 "traddr": "10.0.0.1", 00:11:08.957 "trsvcid": "51706" 00:11:08.957 }, 00:11:08.957 "auth": { 00:11:08.957 "state": "completed", 00:11:08.957 "digest": "sha256", 00:11:08.957 "dhgroup": "null" 00:11:08.957 } 00:11:08.957 } 00:11:08.957 ]' 00:11:09.217 05:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:09.217 05:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:09.217 05:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:09.217 05:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:09.217 05:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:09.217 05:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:09.217 05:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:09.217 05:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:09.475 05:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid 
d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:02:ZjJjZjMyZjRkNDgyNDk4ODc0NzE4NjQ5NDljZDk5ZWFjY2EzMzM2MTU3ODk0MDExsoS2ug==: --dhchap-ctrl-secret DHHC-1:01:ZjU5ZTk2MzljZjcyZDJhZjNhYjYwZDQ3NzM1NjBmNWM8aNZx: 00:11:10.041 05:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:10.041 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:10.041 05:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:11:10.041 05:58:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.041 05:58:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.041 05:58:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.041 05:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:10.041 05:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:10.041 05:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:10.300 05:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:11:10.300 05:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:10.300 05:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:10.300 05:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:10.300 05:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:10.300 05:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:10.300 05:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key3 00:11:10.300 05:58:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.300 05:58:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.559 05:58:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.559 05:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:10.559 05:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:10.817 00:11:10.817 05:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:10.817 05:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:10.817 05:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:11.077 05:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:11:11.077 05:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:11.077 05:58:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.077 05:58:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.077 05:58:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.077 05:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:11.077 { 00:11:11.077 "cntlid": 7, 00:11:11.077 "qid": 0, 00:11:11.077 "state": "enabled", 00:11:11.077 "thread": "nvmf_tgt_poll_group_000", 00:11:11.077 "listen_address": { 00:11:11.077 "trtype": "TCP", 00:11:11.077 "adrfam": "IPv4", 00:11:11.077 "traddr": "10.0.0.2", 00:11:11.077 "trsvcid": "4420" 00:11:11.077 }, 00:11:11.077 "peer_address": { 00:11:11.077 "trtype": "TCP", 00:11:11.077 "adrfam": "IPv4", 00:11:11.077 "traddr": "10.0.0.1", 00:11:11.077 "trsvcid": "51728" 00:11:11.077 }, 00:11:11.077 "auth": { 00:11:11.077 "state": "completed", 00:11:11.077 "digest": "sha256", 00:11:11.077 "dhgroup": "null" 00:11:11.077 } 00:11:11.077 } 00:11:11.077 ]' 00:11:11.077 05:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:11.077 05:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:11.077 05:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:11.077 05:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:11.077 05:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:11.077 05:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:11.077 05:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:11.077 05:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:11.643 05:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:03:NjBiMTNlYjAxMDVmNzNhYWVhODU3MzFjYmI4NDI5NzA3Njc3OTJjMjNiMTQ2NGI3MzAxZDAwMjEzZGViZTllYqy7AwE=: 00:11:12.209 05:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:12.209 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:12.209 05:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:11:12.209 05:58:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.209 05:58:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.209 05:58:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.209 05:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:12.209 05:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:12.209 05:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:12.209 05:58:03 
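Each combination is also exercised through the kernel initiator: the DHHC-1 strings generated earlier are handed verbatim to nvme-cli, the connect is expected to succeed, the controller is disconnected again, and the host entry is removed from the subsystem before the next round. The connect just above uses key3 only, since ckeys[3] was left empty and therefore no controller secret is passed; a sketch of that shape, with the UUID and NQNs from the trace and the rpc.py path shortened:

  HOSTID=d95af516-4532-4483-a837-b3cd72acabce
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 \
       -q "nqn.2014-08.org.nvmexpress:uuid:$HOSTID" --hostid "$HOSTID" \
       --dhchap-secret "$(cat /tmp/spdk.key-sha512.qW6)"        # the DHHC-1:03:... string built earlier
  nvme disconnect -n "$SUBNQN"
  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_remove_host "$SUBNQN" \
       "nqn.2014-08.org.nvmexpress:uuid:$HOSTID"

In the rounds that do have a controller key, the same command additionally carries --dhchap-ctrl-secret with the matching ckey file's contents, as in the DHHC-1:00:.../DHHC-1:03:... pair seen earlier.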
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:12.468 05:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:11:12.468 05:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:12.468 05:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:12.468 05:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:12.468 05:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:12.468 05:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:12.468 05:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:12.468 05:58:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.468 05:58:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.468 05:58:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.468 05:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:12.468 05:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:12.728 00:11:12.728 05:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:12.728 05:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:12.728 05:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:12.988 05:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:12.988 05:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:12.988 05:58:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.988 05:58:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.988 05:58:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.988 05:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:12.988 { 00:11:12.988 "cntlid": 9, 00:11:12.988 "qid": 0, 00:11:12.988 "state": "enabled", 00:11:12.988 "thread": "nvmf_tgt_poll_group_000", 00:11:12.988 "listen_address": { 00:11:12.988 "trtype": "TCP", 00:11:12.988 "adrfam": "IPv4", 00:11:12.988 "traddr": "10.0.0.2", 00:11:12.988 "trsvcid": "4420" 00:11:12.988 }, 00:11:12.988 "peer_address": { 00:11:12.988 "trtype": "TCP", 00:11:12.988 "adrfam": "IPv4", 00:11:12.988 "traddr": "10.0.0.1", 00:11:12.988 "trsvcid": "51752" 00:11:12.988 }, 00:11:12.988 "auth": { 00:11:12.988 "state": "completed", 00:11:12.988 
"digest": "sha256", 00:11:12.988 "dhgroup": "ffdhe2048" 00:11:12.988 } 00:11:12.988 } 00:11:12.988 ]' 00:11:12.988 05:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:12.988 05:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:12.988 05:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:13.247 05:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:13.247 05:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:13.247 05:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:13.247 05:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:13.247 05:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:13.506 05:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:00:MjQ4NDJmYzI5NGU2NWMzMzU1ZGQwODYwMTM4ZmE0OGFlMDk5YWViYWIxNjVjNDQ5rqs7Lw==: --dhchap-ctrl-secret DHHC-1:03:OGZjZjFhZDIwZmU0Zjk1MzNjNjU4YWI4NDk2ZGY3ZTRiNGQzMTVhNTM2YWY0OWJhNDhhMjk4ZjU1MzM0MjA5M0RZaH8=: 00:11:14.076 05:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:14.076 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:14.076 05:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:11:14.076 05:58:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.076 05:58:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.076 05:58:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.076 05:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:14.076 05:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:14.076 05:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:14.335 05:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:11:14.335 05:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:14.335 05:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:14.335 05:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:14.335 05:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:14.335 05:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:14.335 05:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:14.335 05:58:06 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.335 05:58:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.335 05:58:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.335 05:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:14.335 05:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:14.902 00:11:14.902 05:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:14.902 05:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:14.902 05:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:15.161 05:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:15.161 05:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:15.161 05:58:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.161 05:58:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.161 05:58:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.161 05:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:15.161 { 00:11:15.161 "cntlid": 11, 00:11:15.161 "qid": 0, 00:11:15.161 "state": "enabled", 00:11:15.161 "thread": "nvmf_tgt_poll_group_000", 00:11:15.161 "listen_address": { 00:11:15.161 "trtype": "TCP", 00:11:15.161 "adrfam": "IPv4", 00:11:15.161 "traddr": "10.0.0.2", 00:11:15.161 "trsvcid": "4420" 00:11:15.161 }, 00:11:15.161 "peer_address": { 00:11:15.161 "trtype": "TCP", 00:11:15.161 "adrfam": "IPv4", 00:11:15.161 "traddr": "10.0.0.1", 00:11:15.161 "trsvcid": "51780" 00:11:15.161 }, 00:11:15.161 "auth": { 00:11:15.161 "state": "completed", 00:11:15.161 "digest": "sha256", 00:11:15.161 "dhgroup": "ffdhe2048" 00:11:15.161 } 00:11:15.161 } 00:11:15.161 ]' 00:11:15.161 05:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:15.161 05:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:15.161 05:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:15.161 05:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:15.161 05:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:15.161 05:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:15.161 05:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:15.161 05:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:15.419 05:58:07 nvmf_tcp.nvmf_auth_target -- 
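Everything from the first bdev_nvme_set_options call onward is one loop body repeated for every digest, DH group and key index, which is why the remainder of the trace looks so uniform (only the key number, the dhgroup and the cntlid change between rounds). Reconstructed from the loop variables visible in the trace, the driver looks roughly like the sketch below; sha256, null and ffdhe2048 are the only digest/dhgroup values seen in this excerpt, and the full contents of the digests/dhgroups arrays are not shown here:

  for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
          for keyid in "${!keys[@]}"; do
              # hostrpc and connect_authenticate are the test's own helpers, as traced above
              hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
              connect_authenticate "$digest" "$dhgroup" "$keyid"
          done
      done
  done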
target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:01:MTkwODAxY2I3NjM4ZTBiNjMzZjIyNGZhOTNmYjQ4ZGQkfuOA: --dhchap-ctrl-secret DHHC-1:02:OGNiMDE4MTFmZWQwNTFiNGViOTYxYjNlZDZkMTJjOWVjYmNiNWNlZGU2YTRlOGFl9/17zw==: 00:11:16.354 05:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:16.354 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:16.354 05:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:11:16.354 05:58:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.354 05:58:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.354 05:58:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.354 05:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:16.354 05:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:16.354 05:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:16.354 05:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:11:16.354 05:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:16.354 05:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:16.354 05:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:16.354 05:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:16.354 05:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:16.354 05:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:16.354 05:58:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.354 05:58:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.354 05:58:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.354 05:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:16.354 05:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:16.613 00:11:16.613 05:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:16.613 05:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:11:16.613 05:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:16.871 05:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:16.871 05:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:16.871 05:58:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.871 05:58:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.871 05:58:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.871 05:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:16.871 { 00:11:16.871 "cntlid": 13, 00:11:16.871 "qid": 0, 00:11:16.871 "state": "enabled", 00:11:16.871 "thread": "nvmf_tgt_poll_group_000", 00:11:16.871 "listen_address": { 00:11:16.871 "trtype": "TCP", 00:11:16.871 "adrfam": "IPv4", 00:11:16.871 "traddr": "10.0.0.2", 00:11:16.871 "trsvcid": "4420" 00:11:16.871 }, 00:11:16.871 "peer_address": { 00:11:16.871 "trtype": "TCP", 00:11:16.871 "adrfam": "IPv4", 00:11:16.871 "traddr": "10.0.0.1", 00:11:16.871 "trsvcid": "44170" 00:11:16.871 }, 00:11:16.871 "auth": { 00:11:16.871 "state": "completed", 00:11:16.871 "digest": "sha256", 00:11:16.871 "dhgroup": "ffdhe2048" 00:11:16.871 } 00:11:16.871 } 00:11:16.871 ]' 00:11:16.871 05:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:17.130 05:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:17.130 05:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:17.130 05:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:17.130 05:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:17.130 05:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:17.130 05:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:17.130 05:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:17.389 05:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:02:ZjJjZjMyZjRkNDgyNDk4ODc0NzE4NjQ5NDljZDk5ZWFjY2EzMzM2MTU3ODk0MDExsoS2ug==: --dhchap-ctrl-secret DHHC-1:01:ZjU5ZTk2MzljZjcyZDJhZjNhYjYwZDQ3NzM1NjBmNWM8aNZx: 00:11:17.956 05:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:17.956 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:17.956 05:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:11:17.956 05:58:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.956 05:58:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.957 05:58:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.957 05:58:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:17.957 05:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:17.957 05:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:18.216 05:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:11:18.216 05:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:18.216 05:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:18.216 05:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:18.216 05:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:18.216 05:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:18.216 05:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key3 00:11:18.216 05:58:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.216 05:58:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.216 05:58:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.216 05:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:18.216 05:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:18.475 00:11:18.735 05:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:18.735 05:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:18.735 05:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:18.735 05:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:18.735 05:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:18.735 05:58:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.735 05:58:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.735 05:58:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.735 05:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:18.735 { 00:11:18.735 "cntlid": 15, 00:11:18.735 "qid": 0, 00:11:18.735 "state": "enabled", 00:11:18.735 "thread": "nvmf_tgt_poll_group_000", 00:11:18.735 "listen_address": { 00:11:18.735 "trtype": "TCP", 00:11:18.735 "adrfam": "IPv4", 00:11:18.735 "traddr": "10.0.0.2", 00:11:18.735 "trsvcid": "4420" 00:11:18.735 }, 00:11:18.735 "peer_address": { 00:11:18.735 "trtype": "TCP", 
00:11:18.735 "adrfam": "IPv4", 00:11:18.735 "traddr": "10.0.0.1", 00:11:18.735 "trsvcid": "44192" 00:11:18.735 }, 00:11:18.735 "auth": { 00:11:18.735 "state": "completed", 00:11:18.735 "digest": "sha256", 00:11:18.735 "dhgroup": "ffdhe2048" 00:11:18.735 } 00:11:18.735 } 00:11:18.735 ]' 00:11:18.735 05:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:18.994 05:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:18.994 05:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:18.994 05:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:18.994 05:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:18.994 05:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:18.994 05:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:18.994 05:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:19.252 05:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:03:NjBiMTNlYjAxMDVmNzNhYWVhODU3MzFjYmI4NDI5NzA3Njc3OTJjMjNiMTQ2NGI3MzAxZDAwMjEzZGViZTllYqy7AwE=: 00:11:19.820 05:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:19.820 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:19.820 05:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:11:19.820 05:58:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.820 05:58:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.080 05:58:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.080 05:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:20.080 05:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:20.080 05:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:20.080 05:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:20.339 05:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:11:20.339 05:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:20.339 05:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:20.339 05:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:20.339 05:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:20.339 05:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:20.339 05:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:20.339 05:58:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.339 05:58:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.339 05:58:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.339 05:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:20.339 05:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:20.599 00:11:20.599 05:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:20.599 05:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:20.599 05:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:20.858 05:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:20.858 05:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:20.858 05:58:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.858 05:58:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.858 05:58:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.858 05:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:20.858 { 00:11:20.858 "cntlid": 17, 00:11:20.858 "qid": 0, 00:11:20.858 "state": "enabled", 00:11:20.858 "thread": "nvmf_tgt_poll_group_000", 00:11:20.858 "listen_address": { 00:11:20.858 "trtype": "TCP", 00:11:20.858 "adrfam": "IPv4", 00:11:20.858 "traddr": "10.0.0.2", 00:11:20.858 "trsvcid": "4420" 00:11:20.858 }, 00:11:20.858 "peer_address": { 00:11:20.858 "trtype": "TCP", 00:11:20.858 "adrfam": "IPv4", 00:11:20.858 "traddr": "10.0.0.1", 00:11:20.858 "trsvcid": "44210" 00:11:20.858 }, 00:11:20.858 "auth": { 00:11:20.858 "state": "completed", 00:11:20.858 "digest": "sha256", 00:11:20.858 "dhgroup": "ffdhe3072" 00:11:20.858 } 00:11:20.858 } 00:11:20.858 ]' 00:11:20.858 05:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:20.858 05:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:20.858 05:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:21.144 05:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:21.144 05:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:21.144 05:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:21.144 05:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:21.144 05:58:12 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:21.426 05:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:00:MjQ4NDJmYzI5NGU2NWMzMzU1ZGQwODYwMTM4ZmE0OGFlMDk5YWViYWIxNjVjNDQ5rqs7Lw==: --dhchap-ctrl-secret DHHC-1:03:OGZjZjFhZDIwZmU0Zjk1MzNjNjU4YWI4NDk2ZGY3ZTRiNGQzMTVhNTM2YWY0OWJhNDhhMjk4ZjU1MzM0MjA5M0RZaH8=: 00:11:21.990 05:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:21.990 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:21.990 05:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:11:21.990 05:58:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.990 05:58:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.990 05:58:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.990 05:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:21.990 05:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:21.990 05:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:22.248 05:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:11:22.248 05:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:22.248 05:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:22.248 05:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:22.248 05:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:22.248 05:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:22.248 05:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:22.248 05:58:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.248 05:58:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.248 05:58:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.248 05:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:22.248 05:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:22.505 00:11:22.505 05:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:22.505 05:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:22.505 05:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:22.763 05:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:22.763 05:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:22.763 05:58:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.763 05:58:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.763 05:58:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.763 05:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:22.763 { 00:11:22.763 "cntlid": 19, 00:11:22.763 "qid": 0, 00:11:22.763 "state": "enabled", 00:11:22.763 "thread": "nvmf_tgt_poll_group_000", 00:11:22.763 "listen_address": { 00:11:22.763 "trtype": "TCP", 00:11:22.763 "adrfam": "IPv4", 00:11:22.763 "traddr": "10.0.0.2", 00:11:22.763 "trsvcid": "4420" 00:11:22.763 }, 00:11:22.763 "peer_address": { 00:11:22.763 "trtype": "TCP", 00:11:22.763 "adrfam": "IPv4", 00:11:22.763 "traddr": "10.0.0.1", 00:11:22.763 "trsvcid": "44252" 00:11:22.763 }, 00:11:22.763 "auth": { 00:11:22.763 "state": "completed", 00:11:22.763 "digest": "sha256", 00:11:22.763 "dhgroup": "ffdhe3072" 00:11:22.763 } 00:11:22.763 } 00:11:22.763 ]' 00:11:22.763 05:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:22.764 05:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:22.764 05:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:23.022 05:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:23.022 05:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:23.022 05:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:23.022 05:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:23.022 05:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:23.280 05:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:01:MTkwODAxY2I3NjM4ZTBiNjMzZjIyNGZhOTNmYjQ4ZGQkfuOA: --dhchap-ctrl-secret DHHC-1:02:OGNiMDE4MTFmZWQwNTFiNGViOTYxYjNlZDZkMTJjOWVjYmNiNWNlZGU2YTRlOGFl9/17zw==: 00:11:23.845 05:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:23.845 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:23.845 05:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:11:23.845 05:58:15 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.845 05:58:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.845 05:58:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.845 05:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:23.845 05:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:23.845 05:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:24.103 05:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:11:24.103 05:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:24.103 05:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:24.103 05:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:24.103 05:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:24.103 05:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:24.103 05:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:24.103 05:58:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.103 05:58:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.103 05:58:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.103 05:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:24.103 05:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:24.361 00:11:24.361 05:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:24.361 05:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:24.361 05:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:24.619 05:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:24.619 05:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:24.619 05:58:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.619 05:58:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.619 05:58:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.619 05:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:24.619 { 00:11:24.619 "cntlid": 21, 
00:11:24.619 "qid": 0, 00:11:24.619 "state": "enabled", 00:11:24.619 "thread": "nvmf_tgt_poll_group_000", 00:11:24.619 "listen_address": { 00:11:24.619 "trtype": "TCP", 00:11:24.619 "adrfam": "IPv4", 00:11:24.619 "traddr": "10.0.0.2", 00:11:24.619 "trsvcid": "4420" 00:11:24.619 }, 00:11:24.619 "peer_address": { 00:11:24.619 "trtype": "TCP", 00:11:24.619 "adrfam": "IPv4", 00:11:24.619 "traddr": "10.0.0.1", 00:11:24.619 "trsvcid": "44276" 00:11:24.619 }, 00:11:24.619 "auth": { 00:11:24.619 "state": "completed", 00:11:24.619 "digest": "sha256", 00:11:24.619 "dhgroup": "ffdhe3072" 00:11:24.619 } 00:11:24.619 } 00:11:24.619 ]' 00:11:24.619 05:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:24.619 05:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:24.619 05:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:24.877 05:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:24.877 05:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:24.877 05:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:24.877 05:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:24.877 05:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:25.135 05:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:02:ZjJjZjMyZjRkNDgyNDk4ODc0NzE4NjQ5NDljZDk5ZWFjY2EzMzM2MTU3ODk0MDExsoS2ug==: --dhchap-ctrl-secret DHHC-1:01:ZjU5ZTk2MzljZjcyZDJhZjNhYjYwZDQ3NzM1NjBmNWM8aNZx: 00:11:25.703 05:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:25.703 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:25.703 05:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:11:25.703 05:58:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:25.703 05:58:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.703 05:58:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:25.703 05:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:25.703 05:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:25.703 05:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:25.703 05:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:11:25.703 05:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:25.703 05:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:25.703 05:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 
00:11:25.703 05:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:25.703 05:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:25.703 05:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key3 00:11:25.703 05:58:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:25.703 05:58:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.703 05:58:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:25.703 05:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:25.703 05:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:25.962 00:11:26.220 05:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:26.220 05:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:26.220 05:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:26.478 05:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:26.478 05:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:26.478 05:58:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.478 05:58:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.478 05:58:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.478 05:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:26.478 { 00:11:26.478 "cntlid": 23, 00:11:26.478 "qid": 0, 00:11:26.478 "state": "enabled", 00:11:26.478 "thread": "nvmf_tgt_poll_group_000", 00:11:26.478 "listen_address": { 00:11:26.478 "trtype": "TCP", 00:11:26.478 "adrfam": "IPv4", 00:11:26.478 "traddr": "10.0.0.2", 00:11:26.478 "trsvcid": "4420" 00:11:26.478 }, 00:11:26.478 "peer_address": { 00:11:26.478 "trtype": "TCP", 00:11:26.478 "adrfam": "IPv4", 00:11:26.478 "traddr": "10.0.0.1", 00:11:26.478 "trsvcid": "38342" 00:11:26.478 }, 00:11:26.478 "auth": { 00:11:26.478 "state": "completed", 00:11:26.478 "digest": "sha256", 00:11:26.478 "dhgroup": "ffdhe3072" 00:11:26.478 } 00:11:26.478 } 00:11:26.478 ]' 00:11:26.478 05:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:26.478 05:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:26.478 05:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:26.478 05:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:26.478 05:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:26.478 05:58:18 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:26.478 05:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:26.478 05:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:26.736 05:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:03:NjBiMTNlYjAxMDVmNzNhYWVhODU3MzFjYmI4NDI5NzA3Njc3OTJjMjNiMTQ2NGI3MzAxZDAwMjEzZGViZTllYqy7AwE=: 00:11:27.302 05:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:27.302 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:27.302 05:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:11:27.302 05:58:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.302 05:58:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.302 05:58:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.302 05:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:27.302 05:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:27.302 05:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:27.302 05:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:27.560 05:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:11:27.560 05:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:27.560 05:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:27.560 05:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:27.560 05:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:27.560 05:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:27.560 05:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:27.560 05:58:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.560 05:58:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.560 05:58:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.560 05:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:27.561 05:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:27.818 00:11:27.818 05:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:27.818 05:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:27.818 05:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:28.076 05:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:28.076 05:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:28.076 05:58:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.076 05:58:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.076 05:58:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.076 05:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:28.076 { 00:11:28.076 "cntlid": 25, 00:11:28.076 "qid": 0, 00:11:28.076 "state": "enabled", 00:11:28.076 "thread": "nvmf_tgt_poll_group_000", 00:11:28.076 "listen_address": { 00:11:28.076 "trtype": "TCP", 00:11:28.076 "adrfam": "IPv4", 00:11:28.076 "traddr": "10.0.0.2", 00:11:28.076 "trsvcid": "4420" 00:11:28.076 }, 00:11:28.076 "peer_address": { 00:11:28.076 "trtype": "TCP", 00:11:28.076 "adrfam": "IPv4", 00:11:28.076 "traddr": "10.0.0.1", 00:11:28.076 "trsvcid": "38364" 00:11:28.076 }, 00:11:28.076 "auth": { 00:11:28.076 "state": "completed", 00:11:28.076 "digest": "sha256", 00:11:28.076 "dhgroup": "ffdhe4096" 00:11:28.076 } 00:11:28.076 } 00:11:28.076 ]' 00:11:28.076 05:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:28.076 05:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:28.076 05:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:28.334 05:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:28.334 05:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:28.334 05:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:28.334 05:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:28.334 05:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:28.592 05:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:00:MjQ4NDJmYzI5NGU2NWMzMzU1ZGQwODYwMTM4ZmE0OGFlMDk5YWViYWIxNjVjNDQ5rqs7Lw==: --dhchap-ctrl-secret DHHC-1:03:OGZjZjFhZDIwZmU0Zjk1MzNjNjU4YWI4NDk2ZGY3ZTRiNGQzMTVhNTM2YWY0OWJhNDhhMjk4ZjU1MzM0MjA5M0RZaH8=: 00:11:29.157 05:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:29.157 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:29.157 
05:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:11:29.157 05:58:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.157 05:58:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.157 05:58:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.157 05:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:29.157 05:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:29.157 05:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:29.417 05:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:11:29.417 05:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:29.417 05:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:29.417 05:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:29.417 05:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:29.417 05:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:29.417 05:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:29.417 05:58:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.417 05:58:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.417 05:58:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.417 05:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:29.417 05:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:29.676 00:11:29.676 05:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:29.676 05:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:29.676 05:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:29.935 05:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:29.935 05:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:29.935 05:58:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.935 05:58:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:11:29.935 05:58:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.935 05:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:29.935 { 00:11:29.935 "cntlid": 27, 00:11:29.935 "qid": 0, 00:11:29.935 "state": "enabled", 00:11:29.935 "thread": "nvmf_tgt_poll_group_000", 00:11:29.935 "listen_address": { 00:11:29.935 "trtype": "TCP", 00:11:29.935 "adrfam": "IPv4", 00:11:29.935 "traddr": "10.0.0.2", 00:11:29.935 "trsvcid": "4420" 00:11:29.935 }, 00:11:29.935 "peer_address": { 00:11:29.935 "trtype": "TCP", 00:11:29.935 "adrfam": "IPv4", 00:11:29.935 "traddr": "10.0.0.1", 00:11:29.935 "trsvcid": "38396" 00:11:29.935 }, 00:11:29.935 "auth": { 00:11:29.935 "state": "completed", 00:11:29.935 "digest": "sha256", 00:11:29.935 "dhgroup": "ffdhe4096" 00:11:29.935 } 00:11:29.935 } 00:11:29.935 ]' 00:11:29.935 05:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:29.935 05:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:29.935 05:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:29.935 05:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:29.935 05:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:30.195 05:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:30.195 05:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:30.195 05:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:30.454 05:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:01:MTkwODAxY2I3NjM4ZTBiNjMzZjIyNGZhOTNmYjQ4ZGQkfuOA: --dhchap-ctrl-secret DHHC-1:02:OGNiMDE4MTFmZWQwNTFiNGViOTYxYjNlZDZkMTJjOWVjYmNiNWNlZGU2YTRlOGFl9/17zw==: 00:11:31.022 05:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:31.022 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:31.022 05:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:11:31.022 05:58:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.022 05:58:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.022 05:58:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.022 05:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:31.022 05:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:31.022 05:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:31.281 05:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:11:31.281 05:58:22 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:31.281 05:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:31.281 05:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:31.281 05:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:31.281 05:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:31.281 05:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:31.281 05:58:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.281 05:58:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.281 05:58:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.281 05:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:31.281 05:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:31.539 00:11:31.539 05:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:31.539 05:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:31.539 05:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:31.796 05:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:31.796 05:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:31.797 05:58:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.797 05:58:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.797 05:58:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.797 05:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:31.797 { 00:11:31.797 "cntlid": 29, 00:11:31.797 "qid": 0, 00:11:31.797 "state": "enabled", 00:11:31.797 "thread": "nvmf_tgt_poll_group_000", 00:11:31.797 "listen_address": { 00:11:31.797 "trtype": "TCP", 00:11:31.797 "adrfam": "IPv4", 00:11:31.797 "traddr": "10.0.0.2", 00:11:31.797 "trsvcid": "4420" 00:11:31.797 }, 00:11:31.797 "peer_address": { 00:11:31.797 "trtype": "TCP", 00:11:31.797 "adrfam": "IPv4", 00:11:31.797 "traddr": "10.0.0.1", 00:11:31.797 "trsvcid": "38428" 00:11:31.797 }, 00:11:31.797 "auth": { 00:11:31.797 "state": "completed", 00:11:31.797 "digest": "sha256", 00:11:31.797 "dhgroup": "ffdhe4096" 00:11:31.797 } 00:11:31.797 } 00:11:31.797 ]' 00:11:31.797 05:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:31.797 05:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:31.797 05:58:23 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:31.797 05:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:31.797 05:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:32.054 05:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:32.054 05:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:32.055 05:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:32.313 05:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:02:ZjJjZjMyZjRkNDgyNDk4ODc0NzE4NjQ5NDljZDk5ZWFjY2EzMzM2MTU3ODk0MDExsoS2ug==: --dhchap-ctrl-secret DHHC-1:01:ZjU5ZTk2MzljZjcyZDJhZjNhYjYwZDQ3NzM1NjBmNWM8aNZx: 00:11:32.882 05:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:32.882 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:32.882 05:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:11:32.882 05:58:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.882 05:58:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.882 05:58:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.882 05:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:32.882 05:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:32.882 05:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:32.882 05:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:11:32.882 05:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:32.882 05:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:32.882 05:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:32.882 05:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:32.882 05:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:32.882 05:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key3 00:11:32.882 05:58:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.882 05:58:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.882 05:58:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.882 05:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:32.882 05:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:33.450 00:11:33.450 05:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:33.450 05:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:33.450 05:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:33.450 05:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:33.450 05:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:33.450 05:58:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.450 05:58:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.450 05:58:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.450 05:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:33.450 { 00:11:33.450 "cntlid": 31, 00:11:33.450 "qid": 0, 00:11:33.450 "state": "enabled", 00:11:33.450 "thread": "nvmf_tgt_poll_group_000", 00:11:33.450 "listen_address": { 00:11:33.450 "trtype": "TCP", 00:11:33.450 "adrfam": "IPv4", 00:11:33.450 "traddr": "10.0.0.2", 00:11:33.450 "trsvcid": "4420" 00:11:33.450 }, 00:11:33.450 "peer_address": { 00:11:33.450 "trtype": "TCP", 00:11:33.450 "adrfam": "IPv4", 00:11:33.450 "traddr": "10.0.0.1", 00:11:33.450 "trsvcid": "38454" 00:11:33.450 }, 00:11:33.450 "auth": { 00:11:33.450 "state": "completed", 00:11:33.450 "digest": "sha256", 00:11:33.450 "dhgroup": "ffdhe4096" 00:11:33.450 } 00:11:33.450 } 00:11:33.450 ]' 00:11:33.450 05:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:33.708 05:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:33.708 05:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:33.708 05:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:33.708 05:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:33.708 05:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:33.708 05:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:33.708 05:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:33.968 05:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:03:NjBiMTNlYjAxMDVmNzNhYWVhODU3MzFjYmI4NDI5NzA3Njc3OTJjMjNiMTQ2NGI3MzAxZDAwMjEzZGViZTllYqy7AwE=: 00:11:34.537 05:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:34.537 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:34.537 05:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:11:34.537 05:58:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.537 05:58:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.537 05:58:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.537 05:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:34.537 05:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:34.537 05:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:34.537 05:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:34.796 05:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:11:34.796 05:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:34.796 05:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:34.796 05:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:34.796 05:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:34.796 05:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:34.796 05:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:34.796 05:58:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.796 05:58:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.796 05:58:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.796 05:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:34.796 05:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:35.055 00:11:35.055 05:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:35.055 05:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:35.055 05:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:35.314 05:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:35.314 05:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
00:11:35.314 05:58:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.314 05:58:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.573 05:58:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.573 05:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:35.573 { 00:11:35.573 "cntlid": 33, 00:11:35.573 "qid": 0, 00:11:35.573 "state": "enabled", 00:11:35.573 "thread": "nvmf_tgt_poll_group_000", 00:11:35.573 "listen_address": { 00:11:35.573 "trtype": "TCP", 00:11:35.573 "adrfam": "IPv4", 00:11:35.573 "traddr": "10.0.0.2", 00:11:35.573 "trsvcid": "4420" 00:11:35.573 }, 00:11:35.573 "peer_address": { 00:11:35.573 "trtype": "TCP", 00:11:35.573 "adrfam": "IPv4", 00:11:35.573 "traddr": "10.0.0.1", 00:11:35.573 "trsvcid": "49626" 00:11:35.573 }, 00:11:35.573 "auth": { 00:11:35.573 "state": "completed", 00:11:35.573 "digest": "sha256", 00:11:35.573 "dhgroup": "ffdhe6144" 00:11:35.573 } 00:11:35.573 } 00:11:35.573 ]' 00:11:35.573 05:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:35.573 05:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:35.573 05:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:35.573 05:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:35.573 05:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:35.573 05:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:35.573 05:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:35.573 05:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:35.831 05:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:00:MjQ4NDJmYzI5NGU2NWMzMzU1ZGQwODYwMTM4ZmE0OGFlMDk5YWViYWIxNjVjNDQ5rqs7Lw==: --dhchap-ctrl-secret DHHC-1:03:OGZjZjFhZDIwZmU0Zjk1MzNjNjU4YWI4NDk2ZGY3ZTRiNGQzMTVhNTM2YWY0OWJhNDhhMjk4ZjU1MzM0MjA5M0RZaH8=: 00:11:36.399 05:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:36.399 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:36.399 05:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:11:36.399 05:58:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.399 05:58:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.399 05:58:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.399 05:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:36.399 05:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:36.399 05:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:36.658 05:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:11:36.658 05:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:36.658 05:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:36.658 05:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:36.658 05:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:36.658 05:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:36.658 05:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:36.658 05:58:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.658 05:58:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.658 05:58:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.658 05:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:36.658 05:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:37.225 00:11:37.225 05:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:37.225 05:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:37.225 05:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:37.484 05:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:37.484 05:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:37.484 05:58:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.484 05:58:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.484 05:58:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:37.484 05:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:37.484 { 00:11:37.484 "cntlid": 35, 00:11:37.484 "qid": 0, 00:11:37.484 "state": "enabled", 00:11:37.484 "thread": "nvmf_tgt_poll_group_000", 00:11:37.484 "listen_address": { 00:11:37.484 "trtype": "TCP", 00:11:37.484 "adrfam": "IPv4", 00:11:37.484 "traddr": "10.0.0.2", 00:11:37.484 "trsvcid": "4420" 00:11:37.484 }, 00:11:37.484 "peer_address": { 00:11:37.484 "trtype": "TCP", 00:11:37.484 "adrfam": "IPv4", 00:11:37.484 "traddr": "10.0.0.1", 00:11:37.484 "trsvcid": "49664" 00:11:37.484 }, 00:11:37.484 "auth": { 00:11:37.484 "state": "completed", 00:11:37.484 "digest": "sha256", 00:11:37.484 "dhgroup": "ffdhe6144" 00:11:37.484 } 00:11:37.484 } 00:11:37.484 ]' 00:11:37.484 05:58:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:37.484 05:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:37.484 05:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:37.484 05:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:37.484 05:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:37.484 05:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:37.484 05:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:37.484 05:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:37.742 05:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:01:MTkwODAxY2I3NjM4ZTBiNjMzZjIyNGZhOTNmYjQ4ZGQkfuOA: --dhchap-ctrl-secret DHHC-1:02:OGNiMDE4MTFmZWQwNTFiNGViOTYxYjNlZDZkMTJjOWVjYmNiNWNlZGU2YTRlOGFl9/17zw==: 00:11:38.308 05:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:38.308 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:38.308 05:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:11:38.308 05:58:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.308 05:58:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.308 05:58:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.308 05:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:38.308 05:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:38.308 05:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:38.567 05:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:11:38.567 05:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:38.567 05:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:38.567 05:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:38.567 05:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:38.567 05:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:38.567 05:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:38.567 05:58:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.567 05:58:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.567 
05:58:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.567 05:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:38.567 05:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:39.133 00:11:39.133 05:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:39.133 05:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:39.133 05:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:39.391 05:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:39.391 05:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:39.391 05:58:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.391 05:58:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.391 05:58:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.391 05:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:39.391 { 00:11:39.391 "cntlid": 37, 00:11:39.391 "qid": 0, 00:11:39.391 "state": "enabled", 00:11:39.391 "thread": "nvmf_tgt_poll_group_000", 00:11:39.391 "listen_address": { 00:11:39.391 "trtype": "TCP", 00:11:39.391 "adrfam": "IPv4", 00:11:39.391 "traddr": "10.0.0.2", 00:11:39.391 "trsvcid": "4420" 00:11:39.391 }, 00:11:39.391 "peer_address": { 00:11:39.391 "trtype": "TCP", 00:11:39.391 "adrfam": "IPv4", 00:11:39.392 "traddr": "10.0.0.1", 00:11:39.392 "trsvcid": "49678" 00:11:39.392 }, 00:11:39.392 "auth": { 00:11:39.392 "state": "completed", 00:11:39.392 "digest": "sha256", 00:11:39.392 "dhgroup": "ffdhe6144" 00:11:39.392 } 00:11:39.392 } 00:11:39.392 ]' 00:11:39.392 05:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:39.392 05:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:39.392 05:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:39.392 05:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:39.392 05:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:39.392 05:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:39.392 05:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:39.392 05:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:39.651 05:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid 
d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:02:ZjJjZjMyZjRkNDgyNDk4ODc0NzE4NjQ5NDljZDk5ZWFjY2EzMzM2MTU3ODk0MDExsoS2ug==: --dhchap-ctrl-secret DHHC-1:01:ZjU5ZTk2MzljZjcyZDJhZjNhYjYwZDQ3NzM1NjBmNWM8aNZx: 00:11:40.219 05:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:40.219 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:40.219 05:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:11:40.219 05:58:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.219 05:58:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.219 05:58:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.219 05:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:40.219 05:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:40.219 05:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:40.505 05:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:11:40.505 05:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:40.505 05:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:40.505 05:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:40.505 05:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:40.505 05:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:40.505 05:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key3 00:11:40.505 05:58:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.505 05:58:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.505 05:58:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.505 05:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:40.505 05:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:41.093 00:11:41.093 05:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:41.093 05:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:41.093 05:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:41.351 05:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 
-- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:41.351 05:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:41.351 05:58:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.351 05:58:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.351 05:58:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.351 05:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:41.351 { 00:11:41.351 "cntlid": 39, 00:11:41.351 "qid": 0, 00:11:41.351 "state": "enabled", 00:11:41.351 "thread": "nvmf_tgt_poll_group_000", 00:11:41.351 "listen_address": { 00:11:41.351 "trtype": "TCP", 00:11:41.351 "adrfam": "IPv4", 00:11:41.351 "traddr": "10.0.0.2", 00:11:41.351 "trsvcid": "4420" 00:11:41.351 }, 00:11:41.351 "peer_address": { 00:11:41.351 "trtype": "TCP", 00:11:41.351 "adrfam": "IPv4", 00:11:41.351 "traddr": "10.0.0.1", 00:11:41.351 "trsvcid": "49704" 00:11:41.351 }, 00:11:41.351 "auth": { 00:11:41.351 "state": "completed", 00:11:41.351 "digest": "sha256", 00:11:41.351 "dhgroup": "ffdhe6144" 00:11:41.351 } 00:11:41.351 } 00:11:41.351 ]' 00:11:41.351 05:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:41.351 05:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:41.351 05:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:41.351 05:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:41.351 05:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:41.351 05:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:41.351 05:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:41.351 05:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:41.609 05:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:03:NjBiMTNlYjAxMDVmNzNhYWVhODU3MzFjYmI4NDI5NzA3Njc3OTJjMjNiMTQ2NGI3MzAxZDAwMjEzZGViZTllYqy7AwE=: 00:11:42.175 05:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:42.175 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:42.175 05:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:11:42.175 05:58:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.175 05:58:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.175 05:58:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.175 05:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:42.175 05:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:42.175 05:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups 
ffdhe8192 00:11:42.175 05:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:42.434 05:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:11:42.434 05:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:42.434 05:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:42.434 05:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:42.434 05:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:42.434 05:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:42.434 05:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:42.434 05:58:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.434 05:58:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.693 05:58:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.693 05:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:42.693 05:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:43.260 00:11:43.260 05:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:43.260 05:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:43.260 05:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:43.260 05:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:43.260 05:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:43.260 05:58:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.260 05:58:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.519 05:58:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.519 05:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:43.519 { 00:11:43.519 "cntlid": 41, 00:11:43.519 "qid": 0, 00:11:43.519 "state": "enabled", 00:11:43.519 "thread": "nvmf_tgt_poll_group_000", 00:11:43.519 "listen_address": { 00:11:43.519 "trtype": "TCP", 00:11:43.519 "adrfam": "IPv4", 00:11:43.519 "traddr": "10.0.0.2", 00:11:43.519 "trsvcid": "4420" 00:11:43.519 }, 00:11:43.519 "peer_address": { 00:11:43.519 "trtype": "TCP", 00:11:43.519 "adrfam": "IPv4", 00:11:43.519 "traddr": "10.0.0.1", 00:11:43.519 "trsvcid": "49728" 00:11:43.519 }, 00:11:43.519 "auth": { 00:11:43.519 
"state": "completed", 00:11:43.519 "digest": "sha256", 00:11:43.519 "dhgroup": "ffdhe8192" 00:11:43.519 } 00:11:43.519 } 00:11:43.519 ]' 00:11:43.519 05:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:43.519 05:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:43.519 05:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:43.519 05:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:43.519 05:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:43.519 05:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:43.519 05:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:43.519 05:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:43.777 05:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:00:MjQ4NDJmYzI5NGU2NWMzMzU1ZGQwODYwMTM4ZmE0OGFlMDk5YWViYWIxNjVjNDQ5rqs7Lw==: --dhchap-ctrl-secret DHHC-1:03:OGZjZjFhZDIwZmU0Zjk1MzNjNjU4YWI4NDk2ZGY3ZTRiNGQzMTVhNTM2YWY0OWJhNDhhMjk4ZjU1MzM0MjA5M0RZaH8=: 00:11:44.343 05:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:44.344 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:44.344 05:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:11:44.344 05:58:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.344 05:58:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.344 05:58:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.344 05:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:44.344 05:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:44.344 05:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:44.602 05:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:11:44.602 05:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:44.602 05:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:44.602 05:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:44.602 05:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:44.602 05:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:44.602 05:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key1 --dhchap-ctrlr-key ckey1 
00:11:44.602 05:58:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.602 05:58:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.602 05:58:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.602 05:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:44.602 05:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:45.168 00:11:45.168 05:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:45.168 05:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:45.168 05:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:45.426 05:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:45.426 05:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:45.426 05:58:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.426 05:58:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.426 05:58:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.426 05:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:45.426 { 00:11:45.426 "cntlid": 43, 00:11:45.426 "qid": 0, 00:11:45.426 "state": "enabled", 00:11:45.426 "thread": "nvmf_tgt_poll_group_000", 00:11:45.426 "listen_address": { 00:11:45.426 "trtype": "TCP", 00:11:45.426 "adrfam": "IPv4", 00:11:45.426 "traddr": "10.0.0.2", 00:11:45.426 "trsvcid": "4420" 00:11:45.426 }, 00:11:45.426 "peer_address": { 00:11:45.426 "trtype": "TCP", 00:11:45.426 "adrfam": "IPv4", 00:11:45.426 "traddr": "10.0.0.1", 00:11:45.426 "trsvcid": "34388" 00:11:45.426 }, 00:11:45.426 "auth": { 00:11:45.426 "state": "completed", 00:11:45.426 "digest": "sha256", 00:11:45.426 "dhgroup": "ffdhe8192" 00:11:45.426 } 00:11:45.426 } 00:11:45.426 ]' 00:11:45.426 05:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:45.426 05:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:45.426 05:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:45.426 05:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:45.426 05:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:45.426 05:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:45.426 05:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:45.426 05:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:45.685 05:58:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:01:MTkwODAxY2I3NjM4ZTBiNjMzZjIyNGZhOTNmYjQ4ZGQkfuOA: --dhchap-ctrl-secret DHHC-1:02:OGNiMDE4MTFmZWQwNTFiNGViOTYxYjNlZDZkMTJjOWVjYmNiNWNlZGU2YTRlOGFl9/17zw==: 00:11:46.251 05:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:46.251 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:46.251 05:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:11:46.251 05:58:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.251 05:58:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.251 05:58:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.251 05:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:46.251 05:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:46.251 05:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:46.510 05:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:11:46.510 05:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:46.510 05:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:46.510 05:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:46.510 05:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:46.510 05:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:46.510 05:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:46.510 05:58:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.510 05:58:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.510 05:58:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.510 05:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:46.510 05:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:47.076 00:11:47.076 05:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:47.076 05:58:38 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:11:47.076 05:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:47.334 05:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:47.334 05:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:47.334 05:58:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.334 05:58:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.334 05:58:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.334 05:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:47.334 { 00:11:47.334 "cntlid": 45, 00:11:47.334 "qid": 0, 00:11:47.334 "state": "enabled", 00:11:47.334 "thread": "nvmf_tgt_poll_group_000", 00:11:47.334 "listen_address": { 00:11:47.334 "trtype": "TCP", 00:11:47.334 "adrfam": "IPv4", 00:11:47.334 "traddr": "10.0.0.2", 00:11:47.334 "trsvcid": "4420" 00:11:47.334 }, 00:11:47.334 "peer_address": { 00:11:47.334 "trtype": "TCP", 00:11:47.334 "adrfam": "IPv4", 00:11:47.334 "traddr": "10.0.0.1", 00:11:47.334 "trsvcid": "34398" 00:11:47.334 }, 00:11:47.335 "auth": { 00:11:47.335 "state": "completed", 00:11:47.335 "digest": "sha256", 00:11:47.335 "dhgroup": "ffdhe8192" 00:11:47.335 } 00:11:47.335 } 00:11:47.335 ]' 00:11:47.335 05:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:47.593 05:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:47.593 05:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:47.593 05:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:47.593 05:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:47.593 05:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:47.593 05:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:47.593 05:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:47.851 05:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:02:ZjJjZjMyZjRkNDgyNDk4ODc0NzE4NjQ5NDljZDk5ZWFjY2EzMzM2MTU3ODk0MDExsoS2ug==: --dhchap-ctrl-secret DHHC-1:01:ZjU5ZTk2MzljZjcyZDJhZjNhYjYwZDQ3NzM1NjBmNWM8aNZx: 00:11:48.417 05:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:48.417 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:48.417 05:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:11:48.417 05:58:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.417 05:58:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.417 05:58:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.417 05:58:40 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:48.417 05:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:48.417 05:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:48.675 05:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:11:48.675 05:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:48.675 05:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:48.675 05:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:48.675 05:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:48.675 05:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:48.675 05:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key3 00:11:48.675 05:58:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.675 05:58:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.675 05:58:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.675 05:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:48.675 05:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:49.242 00:11:49.242 05:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:49.242 05:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:49.242 05:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:49.500 05:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:49.500 05:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:49.500 05:58:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.500 05:58:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.500 05:58:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.500 05:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:49.500 { 00:11:49.500 "cntlid": 47, 00:11:49.500 "qid": 0, 00:11:49.500 "state": "enabled", 00:11:49.500 "thread": "nvmf_tgt_poll_group_000", 00:11:49.500 "listen_address": { 00:11:49.500 "trtype": "TCP", 00:11:49.500 "adrfam": "IPv4", 00:11:49.500 "traddr": "10.0.0.2", 00:11:49.500 "trsvcid": "4420" 00:11:49.500 }, 00:11:49.500 "peer_address": { 00:11:49.500 "trtype": "TCP", 
00:11:49.500 "adrfam": "IPv4", 00:11:49.500 "traddr": "10.0.0.1", 00:11:49.500 "trsvcid": "34424" 00:11:49.500 }, 00:11:49.500 "auth": { 00:11:49.500 "state": "completed", 00:11:49.500 "digest": "sha256", 00:11:49.500 "dhgroup": "ffdhe8192" 00:11:49.500 } 00:11:49.500 } 00:11:49.500 ]' 00:11:49.500 05:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:49.758 05:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:49.758 05:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:49.758 05:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:49.758 05:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:49.758 05:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:49.758 05:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:49.758 05:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:50.015 05:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:03:NjBiMTNlYjAxMDVmNzNhYWVhODU3MzFjYmI4NDI5NzA3Njc3OTJjMjNiMTQ2NGI3MzAxZDAwMjEzZGViZTllYqy7AwE=: 00:11:50.579 05:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:50.579 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:50.579 05:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:11:50.579 05:58:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:50.579 05:58:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.579 05:58:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.579 05:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:11:50.579 05:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:50.579 05:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:50.579 05:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:50.579 05:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:50.836 05:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:11:50.836 05:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:50.836 05:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:50.836 05:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:50.836 05:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:50.836 05:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:50.836 
05:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:50.836 05:58:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:50.836 05:58:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.836 05:58:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.836 05:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:50.836 05:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:51.094 00:11:51.094 05:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:51.094 05:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:51.094 05:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:51.352 05:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:51.352 05:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:51.352 05:58:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.352 05:58:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.352 05:58:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.352 05:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:51.352 { 00:11:51.352 "cntlid": 49, 00:11:51.352 "qid": 0, 00:11:51.352 "state": "enabled", 00:11:51.352 "thread": "nvmf_tgt_poll_group_000", 00:11:51.352 "listen_address": { 00:11:51.352 "trtype": "TCP", 00:11:51.352 "adrfam": "IPv4", 00:11:51.352 "traddr": "10.0.0.2", 00:11:51.352 "trsvcid": "4420" 00:11:51.352 }, 00:11:51.352 "peer_address": { 00:11:51.352 "trtype": "TCP", 00:11:51.352 "adrfam": "IPv4", 00:11:51.352 "traddr": "10.0.0.1", 00:11:51.352 "trsvcid": "34438" 00:11:51.352 }, 00:11:51.352 "auth": { 00:11:51.352 "state": "completed", 00:11:51.352 "digest": "sha384", 00:11:51.352 "dhgroup": "null" 00:11:51.352 } 00:11:51.352 } 00:11:51.352 ]' 00:11:51.352 05:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:51.352 05:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:51.352 05:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:51.352 05:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:51.352 05:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:51.352 05:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:51.352 05:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller 
nvme0 00:11:51.352 05:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:51.920 05:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:00:MjQ4NDJmYzI5NGU2NWMzMzU1ZGQwODYwMTM4ZmE0OGFlMDk5YWViYWIxNjVjNDQ5rqs7Lw==: --dhchap-ctrl-secret DHHC-1:03:OGZjZjFhZDIwZmU0Zjk1MzNjNjU4YWI4NDk2ZGY3ZTRiNGQzMTVhNTM2YWY0OWJhNDhhMjk4ZjU1MzM0MjA5M0RZaH8=: 00:11:52.487 05:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:52.487 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:52.487 05:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:11:52.487 05:58:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.487 05:58:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.487 05:58:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.487 05:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:52.487 05:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:52.487 05:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:52.745 05:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:11:52.745 05:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:52.745 05:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:52.745 05:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:52.745 05:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:52.745 05:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:52.745 05:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:52.745 05:58:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.745 05:58:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.745 05:58:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.745 05:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:52.745 05:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:53.004 00:11:53.004 05:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:53.004 05:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:53.004 05:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:53.263 05:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:53.263 05:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:53.263 05:58:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.263 05:58:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.263 05:58:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.263 05:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:53.263 { 00:11:53.263 "cntlid": 51, 00:11:53.263 "qid": 0, 00:11:53.263 "state": "enabled", 00:11:53.263 "thread": "nvmf_tgt_poll_group_000", 00:11:53.264 "listen_address": { 00:11:53.264 "trtype": "TCP", 00:11:53.264 "adrfam": "IPv4", 00:11:53.264 "traddr": "10.0.0.2", 00:11:53.264 "trsvcid": "4420" 00:11:53.264 }, 00:11:53.264 "peer_address": { 00:11:53.264 "trtype": "TCP", 00:11:53.264 "adrfam": "IPv4", 00:11:53.264 "traddr": "10.0.0.1", 00:11:53.264 "trsvcid": "34472" 00:11:53.264 }, 00:11:53.264 "auth": { 00:11:53.264 "state": "completed", 00:11:53.264 "digest": "sha384", 00:11:53.264 "dhgroup": "null" 00:11:53.264 } 00:11:53.264 } 00:11:53.264 ]' 00:11:53.264 05:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:53.264 05:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:53.264 05:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:53.264 05:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:53.264 05:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:53.264 05:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:53.264 05:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:53.264 05:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:53.523 05:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:01:MTkwODAxY2I3NjM4ZTBiNjMzZjIyNGZhOTNmYjQ4ZGQkfuOA: --dhchap-ctrl-secret DHHC-1:02:OGNiMDE4MTFmZWQwNTFiNGViOTYxYjNlZDZkMTJjOWVjYmNiNWNlZGU2YTRlOGFl9/17zw==: 00:11:54.458 05:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:54.458 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:54.458 05:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:11:54.458 05:58:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:11:54.458 05:58:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.458 05:58:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.458 05:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:54.458 05:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:54.458 05:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:54.458 05:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:11:54.458 05:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:54.458 05:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:54.458 05:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:54.458 05:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:54.458 05:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:54.458 05:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:54.458 05:58:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.458 05:58:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.458 05:58:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.458 05:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:54.458 05:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:54.717 00:11:54.717 05:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:54.717 05:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:54.717 05:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:54.975 05:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:54.975 05:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:54.975 05:58:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.975 05:58:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.233 05:58:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.233 05:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:55.233 { 00:11:55.233 "cntlid": 53, 00:11:55.233 "qid": 0, 00:11:55.233 "state": "enabled", 
00:11:55.233 "thread": "nvmf_tgt_poll_group_000", 00:11:55.233 "listen_address": { 00:11:55.233 "trtype": "TCP", 00:11:55.233 "adrfam": "IPv4", 00:11:55.233 "traddr": "10.0.0.2", 00:11:55.233 "trsvcid": "4420" 00:11:55.233 }, 00:11:55.233 "peer_address": { 00:11:55.233 "trtype": "TCP", 00:11:55.233 "adrfam": "IPv4", 00:11:55.233 "traddr": "10.0.0.1", 00:11:55.233 "trsvcid": "34752" 00:11:55.233 }, 00:11:55.233 "auth": { 00:11:55.233 "state": "completed", 00:11:55.233 "digest": "sha384", 00:11:55.233 "dhgroup": "null" 00:11:55.233 } 00:11:55.233 } 00:11:55.233 ]' 00:11:55.233 05:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:55.233 05:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:55.233 05:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:55.233 05:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:55.233 05:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:55.233 05:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:55.233 05:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:55.233 05:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:55.492 05:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:02:ZjJjZjMyZjRkNDgyNDk4ODc0NzE4NjQ5NDljZDk5ZWFjY2EzMzM2MTU3ODk0MDExsoS2ug==: --dhchap-ctrl-secret DHHC-1:01:ZjU5ZTk2MzljZjcyZDJhZjNhYjYwZDQ3NzM1NjBmNWM8aNZx: 00:11:56.059 05:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:56.059 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:56.059 05:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:11:56.059 05:58:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.059 05:58:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.059 05:58:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.059 05:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:56.059 05:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:56.059 05:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:56.317 05:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:11:56.317 05:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:56.317 05:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:56.317 05:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:56.317 05:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:56.317 
05:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:56.317 05:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key3 00:11:56.317 05:58:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.317 05:58:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.317 05:58:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.318 05:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:56.318 05:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:56.576 00:11:56.576 05:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:56.576 05:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:56.576 05:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:56.835 05:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:56.836 05:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:56.836 05:58:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.836 05:58:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.836 05:58:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.836 05:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:56.836 { 00:11:56.836 "cntlid": 55, 00:11:56.836 "qid": 0, 00:11:56.836 "state": "enabled", 00:11:56.836 "thread": "nvmf_tgt_poll_group_000", 00:11:56.836 "listen_address": { 00:11:56.836 "trtype": "TCP", 00:11:56.836 "adrfam": "IPv4", 00:11:56.836 "traddr": "10.0.0.2", 00:11:56.836 "trsvcid": "4420" 00:11:56.836 }, 00:11:56.836 "peer_address": { 00:11:56.836 "trtype": "TCP", 00:11:56.836 "adrfam": "IPv4", 00:11:56.836 "traddr": "10.0.0.1", 00:11:56.836 "trsvcid": "34776" 00:11:56.836 }, 00:11:56.836 "auth": { 00:11:56.836 "state": "completed", 00:11:56.836 "digest": "sha384", 00:11:56.836 "dhgroup": "null" 00:11:56.836 } 00:11:56.836 } 00:11:56.836 ]' 00:11:56.836 05:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:56.836 05:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:56.836 05:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:56.836 05:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:56.836 05:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:57.095 05:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:57.095 05:58:48 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:57.095 05:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:57.353 05:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:03:NjBiMTNlYjAxMDVmNzNhYWVhODU3MzFjYmI4NDI5NzA3Njc3OTJjMjNiMTQ2NGI3MzAxZDAwMjEzZGViZTllYqy7AwE=: 00:11:57.920 05:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:57.920 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:57.920 05:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:11:57.920 05:58:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.920 05:58:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.920 05:58:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.920 05:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:57.920 05:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:57.920 05:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:57.920 05:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:58.178 05:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:11:58.178 05:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:58.178 05:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:58.178 05:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:58.178 05:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:58.178 05:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:58.178 05:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:58.178 05:58:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.178 05:58:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.178 05:58:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.178 05:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:58.178 05:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:58.437 00:11:58.437 05:58:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:58.437 05:58:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:58.437 05:58:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:58.695 05:58:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:58.695 05:58:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:58.695 05:58:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.695 05:58:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.695 05:58:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.695 05:58:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:58.695 { 00:11:58.695 "cntlid": 57, 00:11:58.695 "qid": 0, 00:11:58.695 "state": "enabled", 00:11:58.696 "thread": "nvmf_tgt_poll_group_000", 00:11:58.696 "listen_address": { 00:11:58.696 "trtype": "TCP", 00:11:58.696 "adrfam": "IPv4", 00:11:58.696 "traddr": "10.0.0.2", 00:11:58.696 "trsvcid": "4420" 00:11:58.696 }, 00:11:58.696 "peer_address": { 00:11:58.696 "trtype": "TCP", 00:11:58.696 "adrfam": "IPv4", 00:11:58.696 "traddr": "10.0.0.1", 00:11:58.696 "trsvcid": "34792" 00:11:58.696 }, 00:11:58.696 "auth": { 00:11:58.696 "state": "completed", 00:11:58.696 "digest": "sha384", 00:11:58.696 "dhgroup": "ffdhe2048" 00:11:58.696 } 00:11:58.696 } 00:11:58.696 ]' 00:11:58.696 05:58:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:58.696 05:58:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:58.696 05:58:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:58.954 05:58:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:58.954 05:58:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:58.954 05:58:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:58.954 05:58:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:58.954 05:58:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:59.213 05:58:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:00:MjQ4NDJmYzI5NGU2NWMzMzU1ZGQwODYwMTM4ZmE0OGFlMDk5YWViYWIxNjVjNDQ5rqs7Lw==: --dhchap-ctrl-secret DHHC-1:03:OGZjZjFhZDIwZmU0Zjk1MzNjNjU4YWI4NDk2ZGY3ZTRiNGQzMTVhNTM2YWY0OWJhNDhhMjk4ZjU1MzM0MjA5M0RZaH8=: 00:11:59.778 05:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:59.778 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:59.778 05:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:11:59.778 05:58:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.778 05:58:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.778 05:58:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.778 05:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:59.778 05:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:59.778 05:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:00.037 05:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:12:00.037 05:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:00.037 05:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:00.037 05:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:00.037 05:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:00.037 05:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:00.037 05:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:00.037 05:58:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.037 05:58:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.037 05:58:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.037 05:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:00.037 05:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:00.299 00:12:00.299 05:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:00.299 05:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:00.299 05:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:00.560 05:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:00.561 05:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:00.561 05:58:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.561 05:58:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.561 05:58:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.561 
05:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:00.561 { 00:12:00.561 "cntlid": 59, 00:12:00.561 "qid": 0, 00:12:00.561 "state": "enabled", 00:12:00.561 "thread": "nvmf_tgt_poll_group_000", 00:12:00.561 "listen_address": { 00:12:00.561 "trtype": "TCP", 00:12:00.561 "adrfam": "IPv4", 00:12:00.561 "traddr": "10.0.0.2", 00:12:00.561 "trsvcid": "4420" 00:12:00.561 }, 00:12:00.561 "peer_address": { 00:12:00.561 "trtype": "TCP", 00:12:00.561 "adrfam": "IPv4", 00:12:00.561 "traddr": "10.0.0.1", 00:12:00.561 "trsvcid": "34818" 00:12:00.561 }, 00:12:00.561 "auth": { 00:12:00.561 "state": "completed", 00:12:00.561 "digest": "sha384", 00:12:00.561 "dhgroup": "ffdhe2048" 00:12:00.561 } 00:12:00.561 } 00:12:00.561 ]' 00:12:00.561 05:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:00.561 05:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:00.561 05:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:00.561 05:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:00.561 05:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:00.819 05:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:00.819 05:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:00.819 05:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:01.077 05:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:01:MTkwODAxY2I3NjM4ZTBiNjMzZjIyNGZhOTNmYjQ4ZGQkfuOA: --dhchap-ctrl-secret DHHC-1:02:OGNiMDE4MTFmZWQwNTFiNGViOTYxYjNlZDZkMTJjOWVjYmNiNWNlZGU2YTRlOGFl9/17zw==: 00:12:01.657 05:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:01.657 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:01.657 05:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:12:01.657 05:58:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.657 05:58:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.657 05:58:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.657 05:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:01.657 05:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:01.657 05:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:01.915 05:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:12:01.915 05:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:01.915 05:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:12:01.915 05:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:01.915 05:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:01.915 05:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:01.915 05:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:01.915 05:58:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.915 05:58:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.915 05:58:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.915 05:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:01.915 05:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:02.174 00:12:02.174 05:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:02.174 05:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:02.174 05:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:02.740 05:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:02.740 05:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:02.740 05:58:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.740 05:58:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.740 05:58:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.740 05:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:02.740 { 00:12:02.740 "cntlid": 61, 00:12:02.740 "qid": 0, 00:12:02.740 "state": "enabled", 00:12:02.740 "thread": "nvmf_tgt_poll_group_000", 00:12:02.740 "listen_address": { 00:12:02.740 "trtype": "TCP", 00:12:02.740 "adrfam": "IPv4", 00:12:02.740 "traddr": "10.0.0.2", 00:12:02.740 "trsvcid": "4420" 00:12:02.740 }, 00:12:02.740 "peer_address": { 00:12:02.740 "trtype": "TCP", 00:12:02.740 "adrfam": "IPv4", 00:12:02.740 "traddr": "10.0.0.1", 00:12:02.740 "trsvcid": "34842" 00:12:02.740 }, 00:12:02.740 "auth": { 00:12:02.740 "state": "completed", 00:12:02.740 "digest": "sha384", 00:12:02.740 "dhgroup": "ffdhe2048" 00:12:02.740 } 00:12:02.740 } 00:12:02.740 ]' 00:12:02.740 05:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:02.740 05:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:02.740 05:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:02.740 05:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:02.740 05:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:02.740 05:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:02.740 05:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:02.740 05:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:02.998 05:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:02:ZjJjZjMyZjRkNDgyNDk4ODc0NzE4NjQ5NDljZDk5ZWFjY2EzMzM2MTU3ODk0MDExsoS2ug==: --dhchap-ctrl-secret DHHC-1:01:ZjU5ZTk2MzljZjcyZDJhZjNhYjYwZDQ3NzM1NjBmNWM8aNZx: 00:12:03.935 05:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:03.935 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:03.935 05:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:12:03.935 05:58:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.935 05:58:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.935 05:58:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.935 05:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:03.935 05:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:03.935 05:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:03.935 05:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:12:03.935 05:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:03.935 05:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:03.935 05:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:03.935 05:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:03.935 05:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:03.935 05:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key3 00:12:03.935 05:58:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.935 05:58:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.936 05:58:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.936 05:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:03.936 05:58:55 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:04.195 00:12:04.453 05:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:04.453 05:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:04.453 05:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:04.712 05:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:04.712 05:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:04.712 05:58:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.712 05:58:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.712 05:58:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.713 05:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:04.713 { 00:12:04.713 "cntlid": 63, 00:12:04.713 "qid": 0, 00:12:04.713 "state": "enabled", 00:12:04.713 "thread": "nvmf_tgt_poll_group_000", 00:12:04.713 "listen_address": { 00:12:04.713 "trtype": "TCP", 00:12:04.713 "adrfam": "IPv4", 00:12:04.713 "traddr": "10.0.0.2", 00:12:04.713 "trsvcid": "4420" 00:12:04.713 }, 00:12:04.713 "peer_address": { 00:12:04.713 "trtype": "TCP", 00:12:04.713 "adrfam": "IPv4", 00:12:04.713 "traddr": "10.0.0.1", 00:12:04.713 "trsvcid": "34870" 00:12:04.713 }, 00:12:04.713 "auth": { 00:12:04.713 "state": "completed", 00:12:04.713 "digest": "sha384", 00:12:04.713 "dhgroup": "ffdhe2048" 00:12:04.713 } 00:12:04.713 } 00:12:04.713 ]' 00:12:04.713 05:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:04.713 05:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:04.713 05:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:04.713 05:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:04.713 05:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:04.713 05:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:04.713 05:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:04.713 05:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:04.971 05:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:03:NjBiMTNlYjAxMDVmNzNhYWVhODU3MzFjYmI4NDI5NzA3Njc3OTJjMjNiMTQ2NGI3MzAxZDAwMjEzZGViZTllYqy7AwE=: 00:12:05.539 05:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:05.539 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:05.539 05:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:12:05.539 05:58:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.539 05:58:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.539 05:58:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.539 05:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:05.539 05:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:05.539 05:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:05.539 05:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:05.798 05:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:12:05.798 05:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:05.798 05:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:05.798 05:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:05.798 05:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:05.798 05:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:05.798 05:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:05.798 05:58:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.798 05:58:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.798 05:58:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.798 05:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:05.798 05:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:06.058 00:12:06.317 05:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:06.317 05:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:06.317 05:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:06.576 05:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:06.576 05:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:06.576 05:58:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.576 05:58:58 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.576 05:58:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.576 05:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:06.576 { 00:12:06.576 "cntlid": 65, 00:12:06.576 "qid": 0, 00:12:06.576 "state": "enabled", 00:12:06.576 "thread": "nvmf_tgt_poll_group_000", 00:12:06.576 "listen_address": { 00:12:06.576 "trtype": "TCP", 00:12:06.576 "adrfam": "IPv4", 00:12:06.576 "traddr": "10.0.0.2", 00:12:06.576 "trsvcid": "4420" 00:12:06.576 }, 00:12:06.576 "peer_address": { 00:12:06.576 "trtype": "TCP", 00:12:06.576 "adrfam": "IPv4", 00:12:06.576 "traddr": "10.0.0.1", 00:12:06.576 "trsvcid": "52050" 00:12:06.576 }, 00:12:06.576 "auth": { 00:12:06.576 "state": "completed", 00:12:06.576 "digest": "sha384", 00:12:06.576 "dhgroup": "ffdhe3072" 00:12:06.576 } 00:12:06.576 } 00:12:06.576 ]' 00:12:06.576 05:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:06.576 05:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:06.576 05:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:06.576 05:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:06.576 05:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:06.576 05:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:06.576 05:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:06.576 05:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:06.835 05:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:00:MjQ4NDJmYzI5NGU2NWMzMzU1ZGQwODYwMTM4ZmE0OGFlMDk5YWViYWIxNjVjNDQ5rqs7Lw==: --dhchap-ctrl-secret DHHC-1:03:OGZjZjFhZDIwZmU0Zjk1MzNjNjU4YWI4NDk2ZGY3ZTRiNGQzMTVhNTM2YWY0OWJhNDhhMjk4ZjU1MzM0MjA5M0RZaH8=: 00:12:07.400 05:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:07.400 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:07.400 05:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:12:07.400 05:58:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.400 05:58:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.400 05:58:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.400 05:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:07.400 05:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:07.400 05:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:07.966 05:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 
-- # connect_authenticate sha384 ffdhe3072 1 00:12:07.966 05:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:07.966 05:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:07.966 05:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:07.966 05:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:07.966 05:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:07.966 05:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:07.966 05:58:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.966 05:58:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.966 05:58:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.966 05:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:07.966 05:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:08.225 00:12:08.225 05:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:08.225 05:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:08.225 05:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:08.484 05:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:08.484 05:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:08.484 05:59:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.484 05:59:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.484 05:59:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.484 05:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:08.484 { 00:12:08.484 "cntlid": 67, 00:12:08.484 "qid": 0, 00:12:08.484 "state": "enabled", 00:12:08.484 "thread": "nvmf_tgt_poll_group_000", 00:12:08.484 "listen_address": { 00:12:08.484 "trtype": "TCP", 00:12:08.484 "adrfam": "IPv4", 00:12:08.484 "traddr": "10.0.0.2", 00:12:08.484 "trsvcid": "4420" 00:12:08.484 }, 00:12:08.484 "peer_address": { 00:12:08.484 "trtype": "TCP", 00:12:08.484 "adrfam": "IPv4", 00:12:08.484 "traddr": "10.0.0.1", 00:12:08.484 "trsvcid": "52092" 00:12:08.484 }, 00:12:08.484 "auth": { 00:12:08.484 "state": "completed", 00:12:08.484 "digest": "sha384", 00:12:08.484 "dhgroup": "ffdhe3072" 00:12:08.484 } 00:12:08.484 } 00:12:08.484 ]' 00:12:08.484 05:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:08.484 05:59:00 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:08.484 05:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:08.484 05:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:08.484 05:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:08.742 05:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:08.742 05:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:08.742 05:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:08.999 05:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:01:MTkwODAxY2I3NjM4ZTBiNjMzZjIyNGZhOTNmYjQ4ZGQkfuOA: --dhchap-ctrl-secret DHHC-1:02:OGNiMDE4MTFmZWQwNTFiNGViOTYxYjNlZDZkMTJjOWVjYmNiNWNlZGU2YTRlOGFl9/17zw==: 00:12:09.565 05:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:09.565 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:09.565 05:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:12:09.565 05:59:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.565 05:59:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.565 05:59:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.565 05:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:09.565 05:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:09.565 05:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:09.825 05:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:12:09.825 05:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:09.825 05:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:09.825 05:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:09.825 05:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:09.825 05:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:09.825 05:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:09.825 05:59:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.825 05:59:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.825 05:59:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.825 05:59:01 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:09.825 05:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:10.083 00:12:10.083 05:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:10.083 05:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:10.083 05:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:10.342 05:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:10.342 05:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:10.342 05:59:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.342 05:59:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.342 05:59:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.342 05:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:10.342 { 00:12:10.342 "cntlid": 69, 00:12:10.342 "qid": 0, 00:12:10.342 "state": "enabled", 00:12:10.342 "thread": "nvmf_tgt_poll_group_000", 00:12:10.342 "listen_address": { 00:12:10.342 "trtype": "TCP", 00:12:10.342 "adrfam": "IPv4", 00:12:10.342 "traddr": "10.0.0.2", 00:12:10.342 "trsvcid": "4420" 00:12:10.342 }, 00:12:10.342 "peer_address": { 00:12:10.342 "trtype": "TCP", 00:12:10.342 "adrfam": "IPv4", 00:12:10.342 "traddr": "10.0.0.1", 00:12:10.342 "trsvcid": "52104" 00:12:10.342 }, 00:12:10.342 "auth": { 00:12:10.342 "state": "completed", 00:12:10.342 "digest": "sha384", 00:12:10.342 "dhgroup": "ffdhe3072" 00:12:10.342 } 00:12:10.342 } 00:12:10.342 ]' 00:12:10.342 05:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:10.342 05:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:10.342 05:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:10.600 05:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:10.601 05:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:10.601 05:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:10.601 05:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:10.601 05:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:10.859 05:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret 
DHHC-1:02:ZjJjZjMyZjRkNDgyNDk4ODc0NzE4NjQ5NDljZDk5ZWFjY2EzMzM2MTU3ODk0MDExsoS2ug==: --dhchap-ctrl-secret DHHC-1:01:ZjU5ZTk2MzljZjcyZDJhZjNhYjYwZDQ3NzM1NjBmNWM8aNZx: 00:12:11.424 05:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:11.424 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:11.424 05:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:12:11.424 05:59:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.424 05:59:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.424 05:59:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.424 05:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:11.424 05:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:11.424 05:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:11.682 05:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:12:11.682 05:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:11.682 05:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:11.682 05:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:11.682 05:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:11.682 05:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:11.682 05:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key3 00:12:11.682 05:59:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.682 05:59:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.682 05:59:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.682 05:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:11.682 05:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:12.249 00:12:12.249 05:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:12.249 05:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:12.249 05:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:12.249 05:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:12.249 05:59:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:12.249 05:59:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.249 05:59:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.508 05:59:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.508 05:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:12.508 { 00:12:12.508 "cntlid": 71, 00:12:12.508 "qid": 0, 00:12:12.508 "state": "enabled", 00:12:12.508 "thread": "nvmf_tgt_poll_group_000", 00:12:12.508 "listen_address": { 00:12:12.508 "trtype": "TCP", 00:12:12.508 "adrfam": "IPv4", 00:12:12.508 "traddr": "10.0.0.2", 00:12:12.508 "trsvcid": "4420" 00:12:12.508 }, 00:12:12.508 "peer_address": { 00:12:12.508 "trtype": "TCP", 00:12:12.508 "adrfam": "IPv4", 00:12:12.508 "traddr": "10.0.0.1", 00:12:12.508 "trsvcid": "52122" 00:12:12.508 }, 00:12:12.508 "auth": { 00:12:12.508 "state": "completed", 00:12:12.508 "digest": "sha384", 00:12:12.508 "dhgroup": "ffdhe3072" 00:12:12.508 } 00:12:12.508 } 00:12:12.508 ]' 00:12:12.508 05:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:12.508 05:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:12.508 05:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:12.508 05:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:12.508 05:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:12.508 05:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:12.508 05:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:12.508 05:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:12.765 05:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:03:NjBiMTNlYjAxMDVmNzNhYWVhODU3MzFjYmI4NDI5NzA3Njc3OTJjMjNiMTQ2NGI3MzAxZDAwMjEzZGViZTllYqy7AwE=: 00:12:13.699 05:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:13.699 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:13.699 05:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:12:13.699 05:59:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.699 05:59:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.699 05:59:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.699 05:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:13.699 05:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:13.699 05:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:13.700 05:59:05 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:13.700 05:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:12:13.700 05:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:13.700 05:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:13.700 05:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:13.700 05:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:13.700 05:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:13.700 05:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:13.700 05:59:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.700 05:59:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.700 05:59:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.700 05:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:13.700 05:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:14.265 00:12:14.265 05:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:14.265 05:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:14.265 05:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:14.524 05:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:14.524 05:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:14.524 05:59:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.524 05:59:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.524 05:59:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.524 05:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:14.524 { 00:12:14.524 "cntlid": 73, 00:12:14.524 "qid": 0, 00:12:14.524 "state": "enabled", 00:12:14.524 "thread": "nvmf_tgt_poll_group_000", 00:12:14.524 "listen_address": { 00:12:14.524 "trtype": "TCP", 00:12:14.524 "adrfam": "IPv4", 00:12:14.524 "traddr": "10.0.0.2", 00:12:14.524 "trsvcid": "4420" 00:12:14.524 }, 00:12:14.524 "peer_address": { 00:12:14.524 "trtype": "TCP", 00:12:14.524 "adrfam": "IPv4", 00:12:14.524 "traddr": "10.0.0.1", 00:12:14.524 "trsvcid": "52152" 00:12:14.524 }, 00:12:14.524 "auth": { 00:12:14.524 "state": "completed", 00:12:14.524 "digest": "sha384", 
00:12:14.524 "dhgroup": "ffdhe4096" 00:12:14.524 } 00:12:14.524 } 00:12:14.524 ]' 00:12:14.524 05:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:14.524 05:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:14.524 05:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:14.524 05:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:14.524 05:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:14.524 05:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:14.524 05:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:14.524 05:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:14.782 05:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:00:MjQ4NDJmYzI5NGU2NWMzMzU1ZGQwODYwMTM4ZmE0OGFlMDk5YWViYWIxNjVjNDQ5rqs7Lw==: --dhchap-ctrl-secret DHHC-1:03:OGZjZjFhZDIwZmU0Zjk1MzNjNjU4YWI4NDk2ZGY3ZTRiNGQzMTVhNTM2YWY0OWJhNDhhMjk4ZjU1MzM0MjA5M0RZaH8=: 00:12:15.726 05:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:15.726 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:15.726 05:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:12:15.726 05:59:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.726 05:59:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.726 05:59:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.726 05:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:15.726 05:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:15.726 05:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:15.726 05:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:12:15.726 05:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:15.726 05:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:15.726 05:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:15.726 05:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:15.726 05:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:15.726 05:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:15.726 05:59:07 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.726 05:59:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.726 05:59:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.726 05:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:15.726 05:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:16.013 00:12:16.013 05:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:16.013 05:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:16.013 05:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:16.272 05:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:16.272 05:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:16.272 05:59:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.272 05:59:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.531 05:59:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.531 05:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:16.531 { 00:12:16.531 "cntlid": 75, 00:12:16.531 "qid": 0, 00:12:16.531 "state": "enabled", 00:12:16.531 "thread": "nvmf_tgt_poll_group_000", 00:12:16.531 "listen_address": { 00:12:16.531 "trtype": "TCP", 00:12:16.531 "adrfam": "IPv4", 00:12:16.531 "traddr": "10.0.0.2", 00:12:16.531 "trsvcid": "4420" 00:12:16.531 }, 00:12:16.531 "peer_address": { 00:12:16.531 "trtype": "TCP", 00:12:16.531 "adrfam": "IPv4", 00:12:16.531 "traddr": "10.0.0.1", 00:12:16.531 "trsvcid": "37790" 00:12:16.531 }, 00:12:16.531 "auth": { 00:12:16.531 "state": "completed", 00:12:16.531 "digest": "sha384", 00:12:16.531 "dhgroup": "ffdhe4096" 00:12:16.531 } 00:12:16.531 } 00:12:16.531 ]' 00:12:16.531 05:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:16.531 05:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:16.531 05:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:16.531 05:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:16.531 05:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:16.531 05:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:16.531 05:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:16.531 05:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:16.789 05:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:01:MTkwODAxY2I3NjM4ZTBiNjMzZjIyNGZhOTNmYjQ4ZGQkfuOA: --dhchap-ctrl-secret DHHC-1:02:OGNiMDE4MTFmZWQwNTFiNGViOTYxYjNlZDZkMTJjOWVjYmNiNWNlZGU2YTRlOGFl9/17zw==: 00:12:17.720 05:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:17.720 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:17.720 05:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:12:17.720 05:59:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.720 05:59:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.720 05:59:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.720 05:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:17.720 05:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:17.721 05:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:17.721 05:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:12:17.721 05:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:17.721 05:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:17.721 05:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:17.721 05:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:17.721 05:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:17.721 05:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:17.721 05:59:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.721 05:59:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.721 05:59:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.721 05:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:17.721 05:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:17.978 00:12:18.237 05:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:18.237 05:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:18.237 
05:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:18.496 05:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:18.496 05:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:18.496 05:59:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.496 05:59:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.496 05:59:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.496 05:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:18.496 { 00:12:18.496 "cntlid": 77, 00:12:18.496 "qid": 0, 00:12:18.496 "state": "enabled", 00:12:18.496 "thread": "nvmf_tgt_poll_group_000", 00:12:18.496 "listen_address": { 00:12:18.496 "trtype": "TCP", 00:12:18.496 "adrfam": "IPv4", 00:12:18.496 "traddr": "10.0.0.2", 00:12:18.496 "trsvcid": "4420" 00:12:18.496 }, 00:12:18.496 "peer_address": { 00:12:18.496 "trtype": "TCP", 00:12:18.496 "adrfam": "IPv4", 00:12:18.496 "traddr": "10.0.0.1", 00:12:18.496 "trsvcid": "37826" 00:12:18.496 }, 00:12:18.496 "auth": { 00:12:18.496 "state": "completed", 00:12:18.496 "digest": "sha384", 00:12:18.496 "dhgroup": "ffdhe4096" 00:12:18.496 } 00:12:18.496 } 00:12:18.496 ]' 00:12:18.496 05:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:18.496 05:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:18.496 05:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:18.496 05:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:18.496 05:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:18.496 05:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:18.496 05:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:18.496 05:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:18.755 05:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:02:ZjJjZjMyZjRkNDgyNDk4ODc0NzE4NjQ5NDljZDk5ZWFjY2EzMzM2MTU3ODk0MDExsoS2ug==: --dhchap-ctrl-secret DHHC-1:01:ZjU5ZTk2MzljZjcyZDJhZjNhYjYwZDQ3NzM1NjBmNWM8aNZx: 00:12:19.713 05:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:19.713 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:19.713 05:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:12:19.713 05:59:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.713 05:59:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.713 05:59:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.713 05:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # 
for keyid in "${!keys[@]}" 00:12:19.713 05:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:19.713 05:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:19.713 05:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:12:19.713 05:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:19.713 05:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:19.713 05:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:19.713 05:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:19.713 05:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:19.713 05:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key3 00:12:19.713 05:59:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.713 05:59:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.713 05:59:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.713 05:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:19.713 05:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:19.996 00:12:20.261 05:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:20.261 05:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:20.261 05:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:20.521 05:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:20.521 05:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:20.521 05:59:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.521 05:59:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.521 05:59:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.521 05:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:20.521 { 00:12:20.521 "cntlid": 79, 00:12:20.521 "qid": 0, 00:12:20.521 "state": "enabled", 00:12:20.521 "thread": "nvmf_tgt_poll_group_000", 00:12:20.521 "listen_address": { 00:12:20.521 "trtype": "TCP", 00:12:20.521 "adrfam": "IPv4", 00:12:20.521 "traddr": "10.0.0.2", 00:12:20.521 "trsvcid": "4420" 00:12:20.521 }, 00:12:20.521 "peer_address": { 00:12:20.521 "trtype": "TCP", 00:12:20.521 "adrfam": "IPv4", 00:12:20.521 "traddr": 
"10.0.0.1", 00:12:20.521 "trsvcid": "37854" 00:12:20.521 }, 00:12:20.521 "auth": { 00:12:20.521 "state": "completed", 00:12:20.521 "digest": "sha384", 00:12:20.521 "dhgroup": "ffdhe4096" 00:12:20.521 } 00:12:20.521 } 00:12:20.521 ]' 00:12:20.521 05:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:20.521 05:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:20.521 05:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:20.521 05:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:20.521 05:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:20.521 05:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:20.521 05:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:20.521 05:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:20.780 05:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:03:NjBiMTNlYjAxMDVmNzNhYWVhODU3MzFjYmI4NDI5NzA3Njc3OTJjMjNiMTQ2NGI3MzAxZDAwMjEzZGViZTllYqy7AwE=: 00:12:21.348 05:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:21.348 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:21.348 05:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:12:21.348 05:59:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.348 05:59:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.348 05:59:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.348 05:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:21.348 05:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:21.348 05:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:21.348 05:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:21.607 05:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:12:21.607 05:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:21.607 05:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:21.607 05:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:21.607 05:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:21.607 05:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:21.607 05:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:21.607 05:59:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.607 05:59:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.607 05:59:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.607 05:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:21.607 05:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:22.174 00:12:22.174 05:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:22.174 05:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:22.174 05:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:22.174 05:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:22.174 05:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:22.174 05:59:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.174 05:59:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.174 05:59:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.174 05:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:22.174 { 00:12:22.174 "cntlid": 81, 00:12:22.174 "qid": 0, 00:12:22.174 "state": "enabled", 00:12:22.174 "thread": "nvmf_tgt_poll_group_000", 00:12:22.174 "listen_address": { 00:12:22.174 "trtype": "TCP", 00:12:22.174 "adrfam": "IPv4", 00:12:22.174 "traddr": "10.0.0.2", 00:12:22.174 "trsvcid": "4420" 00:12:22.174 }, 00:12:22.174 "peer_address": { 00:12:22.174 "trtype": "TCP", 00:12:22.174 "adrfam": "IPv4", 00:12:22.174 "traddr": "10.0.0.1", 00:12:22.174 "trsvcid": "37874" 00:12:22.174 }, 00:12:22.174 "auth": { 00:12:22.174 "state": "completed", 00:12:22.174 "digest": "sha384", 00:12:22.174 "dhgroup": "ffdhe6144" 00:12:22.174 } 00:12:22.174 } 00:12:22.174 ]' 00:12:22.174 05:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:22.432 05:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:22.432 05:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:22.432 05:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:22.432 05:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:22.432 05:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:22.432 05:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:22.432 05:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:22.691 05:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:00:MjQ4NDJmYzI5NGU2NWMzMzU1ZGQwODYwMTM4ZmE0OGFlMDk5YWViYWIxNjVjNDQ5rqs7Lw==: --dhchap-ctrl-secret DHHC-1:03:OGZjZjFhZDIwZmU0Zjk1MzNjNjU4YWI4NDk2ZGY3ZTRiNGQzMTVhNTM2YWY0OWJhNDhhMjk4ZjU1MzM0MjA5M0RZaH8=: 00:12:23.259 05:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:23.259 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:23.259 05:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:12:23.259 05:59:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.259 05:59:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.259 05:59:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.259 05:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:23.259 05:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:23.259 05:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:23.519 05:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:12:23.519 05:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:23.519 05:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:23.519 05:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:23.519 05:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:23.519 05:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:23.519 05:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:23.519 05:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.519 05:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.519 05:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.519 05:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:23.519 05:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
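For readers stepping through the trace, every (digest, dhgroup, keyid) combination above repeats the same host/target round trip. Below is a minimal recap of that sequence for the sha384/ffdhe6144/key1 case, using only the RPCs and nvme-cli flags that appear in the log; rpc_cmd is the autotest helper that forwards to the target's rpc.py, the key names key1/ckey1 refer to keys set up earlier in auth.sh (not shown in this excerpt), and $key1/$ckey1 stand in for the DHHC-1:00/DHHC-1:03 secret strings visible above.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce
  hostid=d95af516-4532-4483-a837-b3cd72acabce

  # Host side: restrict the initiator to a single digest/dhgroup pair.
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

  # Target side: authorize the host with both a host key and a controller key.
  rpc_cmd nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Host side: attaching the controller triggers the DH-HMAC-CHAP exchange.
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q $hostnqn -n $subnqn --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Confirm what was negotiated on the target's qpair, then tear the bdev controller down.
  rpc_cmd nvmf_subsystem_get_qpairs $subnqn | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

  # Repeat the handshake through nvme-cli, passing the raw DHHC-1 secrets instead of key names.
  nvme connect -t tcp -a 10.0.0.2 -n $subnqn -i 1 -q $hostnqn --hostid $hostid \
      --dhchap-secret "$key1" --dhchap-ctrl-secret "$ckey1"
  nvme disconnect -n $subnqn
  rpc_cmd nvmf_subsystem_remove_host $subnqn $hostnqn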
00:12:23.778 00:12:24.036 05:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:24.036 05:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:24.036 05:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:24.036 05:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:24.036 05:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:24.036 05:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.036 05:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.036 05:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.036 05:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:24.036 { 00:12:24.036 "cntlid": 83, 00:12:24.037 "qid": 0, 00:12:24.037 "state": "enabled", 00:12:24.037 "thread": "nvmf_tgt_poll_group_000", 00:12:24.037 "listen_address": { 00:12:24.037 "trtype": "TCP", 00:12:24.037 "adrfam": "IPv4", 00:12:24.037 "traddr": "10.0.0.2", 00:12:24.037 "trsvcid": "4420" 00:12:24.037 }, 00:12:24.037 "peer_address": { 00:12:24.037 "trtype": "TCP", 00:12:24.037 "adrfam": "IPv4", 00:12:24.037 "traddr": "10.0.0.1", 00:12:24.037 "trsvcid": "37902" 00:12:24.037 }, 00:12:24.037 "auth": { 00:12:24.037 "state": "completed", 00:12:24.037 "digest": "sha384", 00:12:24.037 "dhgroup": "ffdhe6144" 00:12:24.037 } 00:12:24.037 } 00:12:24.037 ]' 00:12:24.037 05:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:24.296 05:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:24.296 05:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:24.296 05:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:24.296 05:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:24.296 05:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:24.296 05:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:24.296 05:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:24.554 05:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:01:MTkwODAxY2I3NjM4ZTBiNjMzZjIyNGZhOTNmYjQ4ZGQkfuOA: --dhchap-ctrl-secret DHHC-1:02:OGNiMDE4MTFmZWQwNTFiNGViOTYxYjNlZDZkMTJjOWVjYmNiNWNlZGU2YTRlOGFl9/17zw==: 00:12:25.122 05:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:25.122 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:25.122 05:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:12:25.122 05:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.122 05:59:16 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.123 05:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.123 05:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:25.123 05:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:25.123 05:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:25.382 05:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:12:25.382 05:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:25.382 05:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:25.382 05:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:25.382 05:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:25.382 05:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:25.382 05:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:25.382 05:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.382 05:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.382 05:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.382 05:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:25.382 05:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:25.641 00:12:25.641 05:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:25.641 05:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:25.641 05:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:25.900 05:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:25.900 05:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:25.900 05:59:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.900 05:59:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.900 05:59:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.900 05:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:25.900 { 00:12:25.900 "cntlid": 85, 00:12:25.900 "qid": 0, 00:12:25.900 "state": "enabled", 00:12:25.900 "thread": 
"nvmf_tgt_poll_group_000", 00:12:25.900 "listen_address": { 00:12:25.900 "trtype": "TCP", 00:12:25.900 "adrfam": "IPv4", 00:12:25.900 "traddr": "10.0.0.2", 00:12:25.900 "trsvcid": "4420" 00:12:25.900 }, 00:12:25.900 "peer_address": { 00:12:25.900 "trtype": "TCP", 00:12:25.900 "adrfam": "IPv4", 00:12:25.900 "traddr": "10.0.0.1", 00:12:25.900 "trsvcid": "32884" 00:12:25.900 }, 00:12:25.900 "auth": { 00:12:25.900 "state": "completed", 00:12:25.900 "digest": "sha384", 00:12:25.900 "dhgroup": "ffdhe6144" 00:12:25.900 } 00:12:25.900 } 00:12:25.900 ]' 00:12:25.900 05:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:26.159 05:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:26.159 05:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:26.159 05:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:26.159 05:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:26.159 05:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:26.159 05:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:26.159 05:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:26.419 05:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:02:ZjJjZjMyZjRkNDgyNDk4ODc0NzE4NjQ5NDljZDk5ZWFjY2EzMzM2MTU3ODk0MDExsoS2ug==: --dhchap-ctrl-secret DHHC-1:01:ZjU5ZTk2MzljZjcyZDJhZjNhYjYwZDQ3NzM1NjBmNWM8aNZx: 00:12:26.987 05:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:26.987 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:26.987 05:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:12:26.987 05:59:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.987 05:59:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.245 05:59:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.245 05:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:27.245 05:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:27.245 05:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:27.245 05:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:12:27.245 05:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:27.245 05:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:27.245 05:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:27.245 05:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 
00:12:27.245 05:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:27.245 05:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key3 00:12:27.245 05:59:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.245 05:59:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.245 05:59:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.245 05:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:27.245 05:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:27.813 00:12:27.813 05:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:27.813 05:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:27.813 05:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:28.072 05:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:28.072 05:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:28.072 05:59:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.072 05:59:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.072 05:59:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.072 05:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:28.072 { 00:12:28.072 "cntlid": 87, 00:12:28.072 "qid": 0, 00:12:28.072 "state": "enabled", 00:12:28.072 "thread": "nvmf_tgt_poll_group_000", 00:12:28.072 "listen_address": { 00:12:28.072 "trtype": "TCP", 00:12:28.072 "adrfam": "IPv4", 00:12:28.072 "traddr": "10.0.0.2", 00:12:28.072 "trsvcid": "4420" 00:12:28.072 }, 00:12:28.072 "peer_address": { 00:12:28.072 "trtype": "TCP", 00:12:28.072 "adrfam": "IPv4", 00:12:28.072 "traddr": "10.0.0.1", 00:12:28.072 "trsvcid": "32910" 00:12:28.072 }, 00:12:28.072 "auth": { 00:12:28.072 "state": "completed", 00:12:28.072 "digest": "sha384", 00:12:28.072 "dhgroup": "ffdhe6144" 00:12:28.072 } 00:12:28.072 } 00:12:28.072 ]' 00:12:28.072 05:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:28.072 05:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:28.072 05:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:28.072 05:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:28.072 05:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:28.072 05:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:28.072 05:59:19 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:28.072 05:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:28.640 05:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:03:NjBiMTNlYjAxMDVmNzNhYWVhODU3MzFjYmI4NDI5NzA3Njc3OTJjMjNiMTQ2NGI3MzAxZDAwMjEzZGViZTllYqy7AwE=: 00:12:29.207 05:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:29.207 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:29.207 05:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:12:29.207 05:59:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.207 05:59:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.207 05:59:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.207 05:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:29.207 05:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:29.207 05:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:29.207 05:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:29.466 05:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:12:29.466 05:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:29.466 05:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:29.466 05:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:29.466 05:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:29.466 05:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:29.466 05:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:29.466 05:59:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.466 05:59:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.466 05:59:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.466 05:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:29.466 05:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:30.033 00:12:30.033 05:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:30.033 05:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:30.033 05:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:30.292 05:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:30.292 05:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:30.292 05:59:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.292 05:59:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.292 05:59:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.292 05:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:30.292 { 00:12:30.292 "cntlid": 89, 00:12:30.292 "qid": 0, 00:12:30.292 "state": "enabled", 00:12:30.292 "thread": "nvmf_tgt_poll_group_000", 00:12:30.292 "listen_address": { 00:12:30.292 "trtype": "TCP", 00:12:30.292 "adrfam": "IPv4", 00:12:30.292 "traddr": "10.0.0.2", 00:12:30.292 "trsvcid": "4420" 00:12:30.292 }, 00:12:30.292 "peer_address": { 00:12:30.292 "trtype": "TCP", 00:12:30.292 "adrfam": "IPv4", 00:12:30.292 "traddr": "10.0.0.1", 00:12:30.292 "trsvcid": "32932" 00:12:30.292 }, 00:12:30.292 "auth": { 00:12:30.292 "state": "completed", 00:12:30.292 "digest": "sha384", 00:12:30.292 "dhgroup": "ffdhe8192" 00:12:30.292 } 00:12:30.292 } 00:12:30.292 ]' 00:12:30.292 05:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:30.292 05:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:30.292 05:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:30.292 05:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:30.292 05:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:30.292 05:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:30.292 05:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:30.292 05:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:30.552 05:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:00:MjQ4NDJmYzI5NGU2NWMzMzU1ZGQwODYwMTM4ZmE0OGFlMDk5YWViYWIxNjVjNDQ5rqs7Lw==: --dhchap-ctrl-secret DHHC-1:03:OGZjZjFhZDIwZmU0Zjk1MzNjNjU4YWI4NDk2ZGY3ZTRiNGQzMTVhNTM2YWY0OWJhNDhhMjk4ZjU1MzM0MjA5M0RZaH8=: 00:12:31.118 05:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:31.118 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:31.118 05:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:12:31.118 05:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.118 05:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.375 05:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.375 05:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:31.375 05:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:31.375 05:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:31.375 05:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:12:31.375 05:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:31.375 05:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:31.375 05:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:31.375 05:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:31.375 05:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:31.375 05:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:31.375 05:59:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.375 05:59:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.375 05:59:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.375 05:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:31.375 05:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:31.942 00:12:31.942 05:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:31.942 05:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:31.942 05:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:32.200 05:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:32.200 05:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:32.200 05:59:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.200 05:59:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.200 05:59:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.200 
05:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:32.200 { 00:12:32.200 "cntlid": 91, 00:12:32.200 "qid": 0, 00:12:32.200 "state": "enabled", 00:12:32.200 "thread": "nvmf_tgt_poll_group_000", 00:12:32.200 "listen_address": { 00:12:32.200 "trtype": "TCP", 00:12:32.200 "adrfam": "IPv4", 00:12:32.200 "traddr": "10.0.0.2", 00:12:32.200 "trsvcid": "4420" 00:12:32.200 }, 00:12:32.200 "peer_address": { 00:12:32.200 "trtype": "TCP", 00:12:32.200 "adrfam": "IPv4", 00:12:32.200 "traddr": "10.0.0.1", 00:12:32.200 "trsvcid": "32950" 00:12:32.200 }, 00:12:32.200 "auth": { 00:12:32.200 "state": "completed", 00:12:32.200 "digest": "sha384", 00:12:32.200 "dhgroup": "ffdhe8192" 00:12:32.200 } 00:12:32.200 } 00:12:32.200 ]' 00:12:32.200 05:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:32.459 05:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:32.459 05:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:32.459 05:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:32.459 05:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:32.459 05:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:32.459 05:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:32.459 05:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:32.716 05:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:01:MTkwODAxY2I3NjM4ZTBiNjMzZjIyNGZhOTNmYjQ4ZGQkfuOA: --dhchap-ctrl-secret DHHC-1:02:OGNiMDE4MTFmZWQwNTFiNGViOTYxYjNlZDZkMTJjOWVjYmNiNWNlZGU2YTRlOGFl9/17zw==: 00:12:33.282 05:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:33.282 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:33.282 05:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:12:33.282 05:59:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.282 05:59:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.282 05:59:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.282 05:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:33.283 05:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:33.283 05:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:33.540 05:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:12:33.540 05:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:33.540 05:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:12:33.540 05:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:33.540 05:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:33.540 05:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:33.540 05:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:33.540 05:59:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.540 05:59:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.799 05:59:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.799 05:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:33.799 05:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:34.365 00:12:34.365 05:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:34.365 05:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:34.365 05:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:34.624 05:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:34.624 05:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:34.624 05:59:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.624 05:59:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.624 05:59:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.624 05:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:34.624 { 00:12:34.624 "cntlid": 93, 00:12:34.624 "qid": 0, 00:12:34.624 "state": "enabled", 00:12:34.624 "thread": "nvmf_tgt_poll_group_000", 00:12:34.624 "listen_address": { 00:12:34.624 "trtype": "TCP", 00:12:34.624 "adrfam": "IPv4", 00:12:34.624 "traddr": "10.0.0.2", 00:12:34.624 "trsvcid": "4420" 00:12:34.624 }, 00:12:34.624 "peer_address": { 00:12:34.624 "trtype": "TCP", 00:12:34.624 "adrfam": "IPv4", 00:12:34.624 "traddr": "10.0.0.1", 00:12:34.624 "trsvcid": "32970" 00:12:34.624 }, 00:12:34.624 "auth": { 00:12:34.624 "state": "completed", 00:12:34.624 "digest": "sha384", 00:12:34.624 "dhgroup": "ffdhe8192" 00:12:34.624 } 00:12:34.624 } 00:12:34.624 ]' 00:12:34.624 05:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:34.624 05:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:34.624 05:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:34.624 05:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:34.624 05:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:34.624 05:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:34.624 05:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:34.624 05:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:34.883 05:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:02:ZjJjZjMyZjRkNDgyNDk4ODc0NzE4NjQ5NDljZDk5ZWFjY2EzMzM2MTU3ODk0MDExsoS2ug==: --dhchap-ctrl-secret DHHC-1:01:ZjU5ZTk2MzljZjcyZDJhZjNhYjYwZDQ3NzM1NjBmNWM8aNZx: 00:12:35.452 05:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:35.452 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:35.452 05:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:12:35.452 05:59:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.452 05:59:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.452 05:59:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.452 05:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:35.452 05:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:35.452 05:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:35.710 05:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:12:35.710 05:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:35.710 05:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:35.710 05:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:35.710 05:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:35.710 05:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:35.710 05:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key3 00:12:35.710 05:59:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.710 05:59:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.710 05:59:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.710 05:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:35.710 05:59:27 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:36.277 00:12:36.277 05:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:36.277 05:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:36.277 05:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:36.536 05:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:36.536 05:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:36.536 05:59:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.536 05:59:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.536 05:59:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.536 05:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:36.536 { 00:12:36.536 "cntlid": 95, 00:12:36.536 "qid": 0, 00:12:36.536 "state": "enabled", 00:12:36.536 "thread": "nvmf_tgt_poll_group_000", 00:12:36.536 "listen_address": { 00:12:36.536 "trtype": "TCP", 00:12:36.536 "adrfam": "IPv4", 00:12:36.536 "traddr": "10.0.0.2", 00:12:36.536 "trsvcid": "4420" 00:12:36.536 }, 00:12:36.536 "peer_address": { 00:12:36.536 "trtype": "TCP", 00:12:36.536 "adrfam": "IPv4", 00:12:36.536 "traddr": "10.0.0.1", 00:12:36.536 "trsvcid": "57522" 00:12:36.536 }, 00:12:36.536 "auth": { 00:12:36.536 "state": "completed", 00:12:36.536 "digest": "sha384", 00:12:36.536 "dhgroup": "ffdhe8192" 00:12:36.536 } 00:12:36.536 } 00:12:36.536 ]' 00:12:36.536 05:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:36.536 05:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:36.794 05:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:36.794 05:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:36.794 05:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:36.794 05:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:36.794 05:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:36.794 05:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:37.053 05:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:03:NjBiMTNlYjAxMDVmNzNhYWVhODU3MzFjYmI4NDI5NzA3Njc3OTJjMjNiMTQ2NGI3MzAxZDAwMjEzZGViZTllYqy7AwE=: 00:12:37.620 05:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:37.620 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:37.620 05:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:12:37.620 05:59:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.620 05:59:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.620 05:59:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.620 05:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:12:37.620 05:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:37.620 05:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:37.620 05:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:37.620 05:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:37.879 05:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:12:37.880 05:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:37.880 05:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:37.880 05:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:37.880 05:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:37.880 05:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:37.880 05:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:37.880 05:59:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.880 05:59:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.880 05:59:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.880 05:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:37.880 05:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:38.140 00:12:38.140 05:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:38.140 05:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:38.140 05:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:38.400 05:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:38.400 05:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:38.400 05:59:29 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.400 05:59:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.400 05:59:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.400 05:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:38.400 { 00:12:38.400 "cntlid": 97, 00:12:38.400 "qid": 0, 00:12:38.400 "state": "enabled", 00:12:38.400 "thread": "nvmf_tgt_poll_group_000", 00:12:38.400 "listen_address": { 00:12:38.400 "trtype": "TCP", 00:12:38.400 "adrfam": "IPv4", 00:12:38.400 "traddr": "10.0.0.2", 00:12:38.400 "trsvcid": "4420" 00:12:38.400 }, 00:12:38.400 "peer_address": { 00:12:38.400 "trtype": "TCP", 00:12:38.400 "adrfam": "IPv4", 00:12:38.400 "traddr": "10.0.0.1", 00:12:38.400 "trsvcid": "57558" 00:12:38.400 }, 00:12:38.400 "auth": { 00:12:38.400 "state": "completed", 00:12:38.400 "digest": "sha512", 00:12:38.400 "dhgroup": "null" 00:12:38.400 } 00:12:38.400 } 00:12:38.400 ]' 00:12:38.400 05:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:38.400 05:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:38.400 05:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:38.400 05:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:38.400 05:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:38.400 05:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:38.400 05:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:38.400 05:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:38.660 05:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:00:MjQ4NDJmYzI5NGU2NWMzMzU1ZGQwODYwMTM4ZmE0OGFlMDk5YWViYWIxNjVjNDQ5rqs7Lw==: --dhchap-ctrl-secret DHHC-1:03:OGZjZjFhZDIwZmU0Zjk1MzNjNjU4YWI4NDk2ZGY3ZTRiNGQzMTVhNTM2YWY0OWJhNDhhMjk4ZjU1MzM0MjA5M0RZaH8=: 00:12:39.726 05:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:39.726 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:39.726 05:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:12:39.726 05:59:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.726 05:59:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.726 05:59:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.726 05:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:39.726 05:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:39.726 05:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:39.726 05:59:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:12:39.726 05:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:39.726 05:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:39.726 05:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:39.726 05:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:39.726 05:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:39.726 05:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:39.726 05:59:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.726 05:59:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.726 05:59:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.726 05:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:39.726 05:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:39.984 00:12:39.984 05:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:39.984 05:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:39.984 05:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:40.244 05:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:40.244 05:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:40.244 05:59:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.244 05:59:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.244 05:59:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.244 05:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:40.244 { 00:12:40.244 "cntlid": 99, 00:12:40.244 "qid": 0, 00:12:40.244 "state": "enabled", 00:12:40.244 "thread": "nvmf_tgt_poll_group_000", 00:12:40.244 "listen_address": { 00:12:40.244 "trtype": "TCP", 00:12:40.244 "adrfam": "IPv4", 00:12:40.244 "traddr": "10.0.0.2", 00:12:40.244 "trsvcid": "4420" 00:12:40.244 }, 00:12:40.244 "peer_address": { 00:12:40.244 "trtype": "TCP", 00:12:40.244 "adrfam": "IPv4", 00:12:40.244 "traddr": "10.0.0.1", 00:12:40.244 "trsvcid": "57582" 00:12:40.244 }, 00:12:40.244 "auth": { 00:12:40.244 "state": "completed", 00:12:40.244 "digest": "sha512", 00:12:40.244 "dhgroup": "null" 00:12:40.244 } 00:12:40.244 } 00:12:40.244 ]' 00:12:40.244 05:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:40.244 05:59:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:40.504 05:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:40.504 05:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:40.504 05:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:40.504 05:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:40.504 05:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:40.504 05:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:40.762 05:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:01:MTkwODAxY2I3NjM4ZTBiNjMzZjIyNGZhOTNmYjQ4ZGQkfuOA: --dhchap-ctrl-secret DHHC-1:02:OGNiMDE4MTFmZWQwNTFiNGViOTYxYjNlZDZkMTJjOWVjYmNiNWNlZGU2YTRlOGFl9/17zw==: 00:12:41.329 05:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:41.329 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:41.329 05:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:12:41.329 05:59:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.329 05:59:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.329 05:59:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.329 05:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:41.329 05:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:41.329 05:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:41.587 05:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:12:41.587 05:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:41.587 05:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:41.587 05:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:41.587 05:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:41.587 05:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:41.587 05:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:41.587 05:59:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.587 05:59:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.587 05:59:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.587 05:59:33 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:41.587 05:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:42.155 00:12:42.155 05:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:42.155 05:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:42.155 05:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:42.155 05:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:42.155 05:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:42.155 05:59:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.155 05:59:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.155 05:59:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.155 05:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:42.155 { 00:12:42.155 "cntlid": 101, 00:12:42.155 "qid": 0, 00:12:42.155 "state": "enabled", 00:12:42.155 "thread": "nvmf_tgt_poll_group_000", 00:12:42.155 "listen_address": { 00:12:42.155 "trtype": "TCP", 00:12:42.155 "adrfam": "IPv4", 00:12:42.155 "traddr": "10.0.0.2", 00:12:42.155 "trsvcid": "4420" 00:12:42.155 }, 00:12:42.155 "peer_address": { 00:12:42.155 "trtype": "TCP", 00:12:42.155 "adrfam": "IPv4", 00:12:42.155 "traddr": "10.0.0.1", 00:12:42.155 "trsvcid": "57610" 00:12:42.155 }, 00:12:42.155 "auth": { 00:12:42.155 "state": "completed", 00:12:42.155 "digest": "sha512", 00:12:42.155 "dhgroup": "null" 00:12:42.155 } 00:12:42.155 } 00:12:42.155 ]' 00:12:42.414 05:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:42.414 05:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:42.414 05:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:42.414 05:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:42.414 05:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:42.414 05:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:42.414 05:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:42.414 05:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:42.673 05:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:02:ZjJjZjMyZjRkNDgyNDk4ODc0NzE4NjQ5NDljZDk5ZWFjY2EzMzM2MTU3ODk0MDExsoS2ug==: --dhchap-ctrl-secret 
DHHC-1:01:ZjU5ZTk2MzljZjcyZDJhZjNhYjYwZDQ3NzM1NjBmNWM8aNZx: 00:12:43.241 05:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:43.241 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:43.241 05:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:12:43.241 05:59:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.241 05:59:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.241 05:59:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.241 05:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:43.241 05:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:43.241 05:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:43.498 05:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:12:43.498 05:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:43.498 05:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:43.498 05:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:43.498 05:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:43.498 05:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:43.498 05:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key3 00:12:43.498 05:59:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.498 05:59:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.498 05:59:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.498 05:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:43.498 05:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:43.755 00:12:43.755 05:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:43.755 05:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:43.755 05:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:44.013 05:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:44.013 05:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
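The check that recurs throughout this run is the verification half of connect_authenticate in target/auth.sh: after a controller is attached with a given --dhchap-key, the target's queue pairs for the subsystem are dumped and the negotiated digest, DH group, and authentication state are asserted with jq. A minimal standalone sketch of that step, reusing only commands visible in the trace (the check_auth helper name is illustrative, not part of auth.sh, and it assumes the target RPC server is reachable on rpc.py's default socket; the harness's rpc_cmd wrapper hides which socket it actually uses):

#!/usr/bin/env bash
# Sketch of the per-iteration verification seen in the trace: dump the
# subsystem's qpairs on the target and assert the negotiated DH-HMAC-CHAP
# parameters. Socket, NQN, and jq paths are taken from the log above.
set -euo pipefail

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SUBNQN=nqn.2024-03.io.spdk:cnode0

check_auth() {   # check_auth <digest> <dhgroup>
    local qpairs digest dhgroup state
    qpairs=$($RPC nvmf_subsystem_get_qpairs "$SUBNQN")
    digest=$(jq -r '.[0].auth.digest' <<< "$qpairs")
    dhgroup=$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")
    state=$(jq -r '.[0].auth.state' <<< "$qpairs")
    [[ $digest == "$1" && $dhgroup == "$2" && $state == completed ]]
}

check_auth sha512 null && echo "DH-HMAC-CHAP negotiation verified"

The qpairs JSON printed just below for cntlid 103 is exactly what this check parses: a single enabled TCP queue pair whose auth object reports state "completed" with the digest and dhgroup configured for the iteration.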
00:12:44.013 05:59:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.013 05:59:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.013 05:59:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.013 05:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:44.013 { 00:12:44.013 "cntlid": 103, 00:12:44.013 "qid": 0, 00:12:44.013 "state": "enabled", 00:12:44.013 "thread": "nvmf_tgt_poll_group_000", 00:12:44.013 "listen_address": { 00:12:44.013 "trtype": "TCP", 00:12:44.013 "adrfam": "IPv4", 00:12:44.013 "traddr": "10.0.0.2", 00:12:44.013 "trsvcid": "4420" 00:12:44.013 }, 00:12:44.013 "peer_address": { 00:12:44.013 "trtype": "TCP", 00:12:44.013 "adrfam": "IPv4", 00:12:44.013 "traddr": "10.0.0.1", 00:12:44.013 "trsvcid": "57634" 00:12:44.013 }, 00:12:44.013 "auth": { 00:12:44.013 "state": "completed", 00:12:44.013 "digest": "sha512", 00:12:44.013 "dhgroup": "null" 00:12:44.013 } 00:12:44.013 } 00:12:44.013 ]' 00:12:44.013 05:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:44.013 05:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:44.013 05:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:44.272 05:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:44.272 05:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:44.272 05:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:44.272 05:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:44.272 05:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:44.529 05:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:03:NjBiMTNlYjAxMDVmNzNhYWVhODU3MzFjYmI4NDI5NzA3Njc3OTJjMjNiMTQ2NGI3MzAxZDAwMjEzZGViZTllYqy7AwE=: 00:12:45.096 05:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:45.096 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:45.096 05:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:12:45.096 05:59:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.096 05:59:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.096 05:59:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.096 05:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:45.096 05:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:45.097 05:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:45.097 05:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe2048 00:12:45.354 05:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:12:45.354 05:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:45.354 05:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:45.354 05:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:45.354 05:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:45.354 05:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:45.354 05:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:45.354 05:59:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.354 05:59:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.354 05:59:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.354 05:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:45.354 05:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:45.612 00:12:45.612 05:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:45.612 05:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:45.612 05:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:45.870 05:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:45.870 05:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:45.870 05:59:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.870 05:59:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.870 05:59:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.870 05:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:45.870 { 00:12:45.870 "cntlid": 105, 00:12:45.870 "qid": 0, 00:12:45.870 "state": "enabled", 00:12:45.870 "thread": "nvmf_tgt_poll_group_000", 00:12:45.870 "listen_address": { 00:12:45.870 "trtype": "TCP", 00:12:45.870 "adrfam": "IPv4", 00:12:45.870 "traddr": "10.0.0.2", 00:12:45.870 "trsvcid": "4420" 00:12:45.870 }, 00:12:45.870 "peer_address": { 00:12:45.870 "trtype": "TCP", 00:12:45.870 "adrfam": "IPv4", 00:12:45.870 "traddr": "10.0.0.1", 00:12:45.870 "trsvcid": "35470" 00:12:45.870 }, 00:12:45.870 "auth": { 00:12:45.870 "state": "completed", 00:12:45.870 "digest": "sha512", 00:12:45.870 "dhgroup": "ffdhe2048" 00:12:45.870 } 00:12:45.870 } 00:12:45.870 ]' 00:12:45.870 05:59:37 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:45.870 05:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:45.870 05:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:45.870 05:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:45.870 05:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:45.870 05:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:45.870 05:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:45.870 05:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:46.128 05:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:00:MjQ4NDJmYzI5NGU2NWMzMzU1ZGQwODYwMTM4ZmE0OGFlMDk5YWViYWIxNjVjNDQ5rqs7Lw==: --dhchap-ctrl-secret DHHC-1:03:OGZjZjFhZDIwZmU0Zjk1MzNjNjU4YWI4NDk2ZGY3ZTRiNGQzMTVhNTM2YWY0OWJhNDhhMjk4ZjU1MzM0MjA5M0RZaH8=: 00:12:47.063 05:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:47.063 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:47.063 05:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:12:47.063 05:59:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.063 05:59:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.063 05:59:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.063 05:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:47.063 05:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:47.063 05:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:47.063 05:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:12:47.063 05:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:47.063 05:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:47.063 05:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:47.063 05:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:47.063 05:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:47.063 05:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:47.063 05:59:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.063 05:59:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:12:47.063 05:59:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.063 05:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:47.063 05:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:47.631 00:12:47.631 05:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:47.631 05:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:47.631 05:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:47.631 05:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:47.631 05:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:47.631 05:59:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.631 05:59:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.631 05:59:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.631 05:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:47.631 { 00:12:47.631 "cntlid": 107, 00:12:47.631 "qid": 0, 00:12:47.631 "state": "enabled", 00:12:47.631 "thread": "nvmf_tgt_poll_group_000", 00:12:47.631 "listen_address": { 00:12:47.631 "trtype": "TCP", 00:12:47.631 "adrfam": "IPv4", 00:12:47.631 "traddr": "10.0.0.2", 00:12:47.631 "trsvcid": "4420" 00:12:47.631 }, 00:12:47.631 "peer_address": { 00:12:47.631 "trtype": "TCP", 00:12:47.631 "adrfam": "IPv4", 00:12:47.631 "traddr": "10.0.0.1", 00:12:47.632 "trsvcid": "35506" 00:12:47.632 }, 00:12:47.632 "auth": { 00:12:47.632 "state": "completed", 00:12:47.632 "digest": "sha512", 00:12:47.632 "dhgroup": "ffdhe2048" 00:12:47.632 } 00:12:47.632 } 00:12:47.632 ]' 00:12:47.632 05:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:47.891 05:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:47.891 05:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:47.891 05:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:47.891 05:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:47.891 05:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:47.891 05:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:47.891 05:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:48.150 05:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 
--hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:01:MTkwODAxY2I3NjM4ZTBiNjMzZjIyNGZhOTNmYjQ4ZGQkfuOA: --dhchap-ctrl-secret DHHC-1:02:OGNiMDE4MTFmZWQwNTFiNGViOTYxYjNlZDZkMTJjOWVjYmNiNWNlZGU2YTRlOGFl9/17zw==: 00:12:48.739 05:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:48.739 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:48.739 05:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:12:48.739 05:59:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.739 05:59:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.739 05:59:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.739 05:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:48.739 05:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:48.739 05:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:48.997 05:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:12:48.997 05:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:48.997 05:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:48.997 05:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:48.997 05:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:48.997 05:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:48.997 05:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:48.997 05:59:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.997 05:59:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.997 05:59:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.997 05:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:48.998 05:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:49.562 00:12:49.562 05:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:49.562 05:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:49.562 05:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:12:49.821 05:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:49.821 05:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:49.821 05:59:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.821 05:59:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.821 05:59:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.821 05:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:49.821 { 00:12:49.821 "cntlid": 109, 00:12:49.821 "qid": 0, 00:12:49.821 "state": "enabled", 00:12:49.821 "thread": "nvmf_tgt_poll_group_000", 00:12:49.821 "listen_address": { 00:12:49.821 "trtype": "TCP", 00:12:49.821 "adrfam": "IPv4", 00:12:49.821 "traddr": "10.0.0.2", 00:12:49.821 "trsvcid": "4420" 00:12:49.821 }, 00:12:49.821 "peer_address": { 00:12:49.821 "trtype": "TCP", 00:12:49.821 "adrfam": "IPv4", 00:12:49.821 "traddr": "10.0.0.1", 00:12:49.821 "trsvcid": "35530" 00:12:49.821 }, 00:12:49.821 "auth": { 00:12:49.821 "state": "completed", 00:12:49.821 "digest": "sha512", 00:12:49.821 "dhgroup": "ffdhe2048" 00:12:49.821 } 00:12:49.821 } 00:12:49.821 ]' 00:12:49.821 05:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:49.821 05:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:49.821 05:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:49.821 05:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:49.821 05:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:49.821 05:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:49.821 05:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:49.821 05:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:50.079 05:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:02:ZjJjZjMyZjRkNDgyNDk4ODc0NzE4NjQ5NDljZDk5ZWFjY2EzMzM2MTU3ODk0MDExsoS2ug==: --dhchap-ctrl-secret DHHC-1:01:ZjU5ZTk2MzljZjcyZDJhZjNhYjYwZDQ3NzM1NjBmNWM8aNZx: 00:12:51.013 05:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:51.013 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:51.013 05:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:12:51.013 05:59:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.013 05:59:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.013 05:59:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.013 05:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:51.013 05:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:51.013 05:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:51.013 05:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:12:51.013 05:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:51.013 05:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:51.013 05:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:51.013 05:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:51.013 05:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:51.013 05:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key3 00:12:51.013 05:59:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.013 05:59:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.014 05:59:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.014 05:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:51.014 05:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:51.579 00:12:51.579 05:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:51.579 05:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:51.579 05:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:51.837 05:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:51.837 05:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:51.837 05:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.837 05:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.837 05:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.837 05:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:51.837 { 00:12:51.837 "cntlid": 111, 00:12:51.837 "qid": 0, 00:12:51.837 "state": "enabled", 00:12:51.837 "thread": "nvmf_tgt_poll_group_000", 00:12:51.837 "listen_address": { 00:12:51.837 "trtype": "TCP", 00:12:51.837 "adrfam": "IPv4", 00:12:51.837 "traddr": "10.0.0.2", 00:12:51.837 "trsvcid": "4420" 00:12:51.837 }, 00:12:51.837 "peer_address": { 00:12:51.837 "trtype": "TCP", 00:12:51.837 "adrfam": "IPv4", 00:12:51.837 "traddr": "10.0.0.1", 00:12:51.837 "trsvcid": "35556" 00:12:51.837 }, 00:12:51.837 "auth": { 00:12:51.837 "state": 
"completed", 00:12:51.837 "digest": "sha512", 00:12:51.837 "dhgroup": "ffdhe2048" 00:12:51.837 } 00:12:51.837 } 00:12:51.837 ]' 00:12:51.837 05:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:51.837 05:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:51.837 05:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:51.837 05:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:51.837 05:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:51.837 05:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:51.837 05:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:51.837 05:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:52.094 05:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:03:NjBiMTNlYjAxMDVmNzNhYWVhODU3MzFjYmI4NDI5NzA3Njc3OTJjMjNiMTQ2NGI3MzAxZDAwMjEzZGViZTllYqy7AwE=: 00:12:52.661 05:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:52.919 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:52.919 05:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:12:52.919 05:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.919 05:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.919 05:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.919 05:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:52.919 05:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:52.919 05:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:52.919 05:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:53.176 05:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:12:53.176 05:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:53.176 05:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:53.176 05:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:53.176 05:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:53.176 05:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:53.176 05:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:12:53.176 05:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.176 05:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.176 05:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.176 05:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:53.176 05:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:53.433 00:12:53.433 05:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:53.433 05:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:53.433 05:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:53.690 05:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:53.690 05:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:53.690 05:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.690 05:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.690 05:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.690 05:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:53.690 { 00:12:53.690 "cntlid": 113, 00:12:53.690 "qid": 0, 00:12:53.690 "state": "enabled", 00:12:53.690 "thread": "nvmf_tgt_poll_group_000", 00:12:53.690 "listen_address": { 00:12:53.690 "trtype": "TCP", 00:12:53.690 "adrfam": "IPv4", 00:12:53.690 "traddr": "10.0.0.2", 00:12:53.690 "trsvcid": "4420" 00:12:53.690 }, 00:12:53.690 "peer_address": { 00:12:53.690 "trtype": "TCP", 00:12:53.690 "adrfam": "IPv4", 00:12:53.690 "traddr": "10.0.0.1", 00:12:53.690 "trsvcid": "35586" 00:12:53.690 }, 00:12:53.690 "auth": { 00:12:53.690 "state": "completed", 00:12:53.690 "digest": "sha512", 00:12:53.690 "dhgroup": "ffdhe3072" 00:12:53.690 } 00:12:53.690 } 00:12:53.690 ]' 00:12:53.690 05:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:53.690 05:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:53.690 05:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:53.690 05:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:53.948 05:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:53.948 05:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:53.948 05:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:53.948 05:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:54.206 05:59:45 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:00:MjQ4NDJmYzI5NGU2NWMzMzU1ZGQwODYwMTM4ZmE0OGFlMDk5YWViYWIxNjVjNDQ5rqs7Lw==: --dhchap-ctrl-secret DHHC-1:03:OGZjZjFhZDIwZmU0Zjk1MzNjNjU4YWI4NDk2ZGY3ZTRiNGQzMTVhNTM2YWY0OWJhNDhhMjk4ZjU1MzM0MjA5M0RZaH8=: 00:12:54.773 05:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:54.773 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:54.773 05:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:12:54.773 05:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.773 05:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.773 05:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.773 05:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:54.773 05:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:54.773 05:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:55.032 05:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:12:55.032 05:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:55.032 05:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:55.032 05:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:55.032 05:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:55.032 05:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:55.032 05:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:55.032 05:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.032 05:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.032 05:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.032 05:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:55.032 05:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:55.291 00:12:55.291 05:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:55.291 05:59:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:55.291 05:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:55.549 05:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:55.549 05:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:55.549 05:59:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.549 05:59:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.549 05:59:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.549 05:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:55.549 { 00:12:55.549 "cntlid": 115, 00:12:55.549 "qid": 0, 00:12:55.549 "state": "enabled", 00:12:55.549 "thread": "nvmf_tgt_poll_group_000", 00:12:55.549 "listen_address": { 00:12:55.549 "trtype": "TCP", 00:12:55.549 "adrfam": "IPv4", 00:12:55.549 "traddr": "10.0.0.2", 00:12:55.549 "trsvcid": "4420" 00:12:55.549 }, 00:12:55.549 "peer_address": { 00:12:55.549 "trtype": "TCP", 00:12:55.549 "adrfam": "IPv4", 00:12:55.549 "traddr": "10.0.0.1", 00:12:55.549 "trsvcid": "51692" 00:12:55.549 }, 00:12:55.549 "auth": { 00:12:55.549 "state": "completed", 00:12:55.549 "digest": "sha512", 00:12:55.549 "dhgroup": "ffdhe3072" 00:12:55.549 } 00:12:55.549 } 00:12:55.549 ]' 00:12:55.549 05:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:55.549 05:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:55.549 05:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:55.807 05:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:55.807 05:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:55.807 05:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:55.807 05:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:55.807 05:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:56.064 05:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:01:MTkwODAxY2I3NjM4ZTBiNjMzZjIyNGZhOTNmYjQ4ZGQkfuOA: --dhchap-ctrl-secret DHHC-1:02:OGNiMDE4MTFmZWQwNTFiNGViOTYxYjNlZDZkMTJjOWVjYmNiNWNlZGU2YTRlOGFl9/17zw==: 00:12:56.632 05:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:56.632 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:56.632 05:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:12:56.632 05:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.632 05:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.632 05:59:48 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.632 05:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:56.632 05:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:56.632 05:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:56.891 05:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:12:56.891 05:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:56.891 05:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:56.891 05:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:56.891 05:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:56.891 05:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:56.891 05:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:56.891 05:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.891 05:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.891 05:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.891 05:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:56.891 05:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:57.150 00:12:57.150 05:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:57.150 05:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:57.150 05:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:57.408 05:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:57.408 05:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:57.408 05:59:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.408 05:59:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.408 05:59:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.408 05:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:57.408 { 00:12:57.408 "cntlid": 117, 00:12:57.408 "qid": 0, 00:12:57.408 "state": "enabled", 00:12:57.408 "thread": "nvmf_tgt_poll_group_000", 00:12:57.408 "listen_address": { 00:12:57.408 "trtype": "TCP", 00:12:57.408 "adrfam": "IPv4", 
00:12:57.408 "traddr": "10.0.0.2", 00:12:57.408 "trsvcid": "4420" 00:12:57.408 }, 00:12:57.408 "peer_address": { 00:12:57.408 "trtype": "TCP", 00:12:57.408 "adrfam": "IPv4", 00:12:57.408 "traddr": "10.0.0.1", 00:12:57.408 "trsvcid": "51726" 00:12:57.408 }, 00:12:57.408 "auth": { 00:12:57.408 "state": "completed", 00:12:57.408 "digest": "sha512", 00:12:57.408 "dhgroup": "ffdhe3072" 00:12:57.408 } 00:12:57.408 } 00:12:57.408 ]' 00:12:57.408 05:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:57.667 05:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:57.667 05:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:57.667 05:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:57.667 05:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:57.667 05:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:57.667 05:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:57.667 05:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:57.926 05:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:02:ZjJjZjMyZjRkNDgyNDk4ODc0NzE4NjQ5NDljZDk5ZWFjY2EzMzM2MTU3ODk0MDExsoS2ug==: --dhchap-ctrl-secret DHHC-1:01:ZjU5ZTk2MzljZjcyZDJhZjNhYjYwZDQ3NzM1NjBmNWM8aNZx: 00:12:58.493 05:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:58.493 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:58.493 05:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:12:58.493 05:59:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.493 05:59:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.493 05:59:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.493 05:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:58.493 05:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:58.493 05:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:58.752 05:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:12:58.752 05:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:58.752 05:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:58.752 05:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:58.752 05:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:58.752 05:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:12:58.753 05:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key3 00:12:58.753 05:59:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.753 05:59:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.753 05:59:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.753 05:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:58.753 05:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:59.319 00:12:59.319 05:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:59.319 05:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:59.319 05:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:59.596 05:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:59.596 05:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:59.596 05:59:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.596 05:59:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.596 05:59:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.596 05:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:59.596 { 00:12:59.596 "cntlid": 119, 00:12:59.596 "qid": 0, 00:12:59.596 "state": "enabled", 00:12:59.596 "thread": "nvmf_tgt_poll_group_000", 00:12:59.596 "listen_address": { 00:12:59.596 "trtype": "TCP", 00:12:59.596 "adrfam": "IPv4", 00:12:59.596 "traddr": "10.0.0.2", 00:12:59.596 "trsvcid": "4420" 00:12:59.596 }, 00:12:59.596 "peer_address": { 00:12:59.596 "trtype": "TCP", 00:12:59.596 "adrfam": "IPv4", 00:12:59.596 "traddr": "10.0.0.1", 00:12:59.596 "trsvcid": "51750" 00:12:59.596 }, 00:12:59.596 "auth": { 00:12:59.596 "state": "completed", 00:12:59.596 "digest": "sha512", 00:12:59.596 "dhgroup": "ffdhe3072" 00:12:59.596 } 00:12:59.596 } 00:12:59.596 ]' 00:12:59.596 05:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:59.596 05:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:59.596 05:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:59.596 05:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:59.596 05:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:59.596 05:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:59.596 05:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:59.596 05:59:51 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:59.869 05:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:03:NjBiMTNlYjAxMDVmNzNhYWVhODU3MzFjYmI4NDI5NzA3Njc3OTJjMjNiMTQ2NGI3MzAxZDAwMjEzZGViZTllYqy7AwE=: 00:13:00.437 05:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:00.437 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:00.437 05:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:13:00.437 05:59:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.437 05:59:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.437 05:59:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.437 05:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:00.437 05:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:00.437 05:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:00.437 05:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:00.696 05:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:13:00.696 05:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:00.696 05:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:00.696 05:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:00.696 05:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:00.696 05:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:00.696 05:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:00.696 05:59:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.696 05:59:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.696 05:59:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.696 05:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:00.696 05:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:00.954 00:13:00.954 05:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:00.954 05:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:00.954 05:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:01.213 05:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:01.213 05:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:01.213 05:59:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.213 05:59:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.471 05:59:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.471 05:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:01.471 { 00:13:01.471 "cntlid": 121, 00:13:01.471 "qid": 0, 00:13:01.471 "state": "enabled", 00:13:01.471 "thread": "nvmf_tgt_poll_group_000", 00:13:01.471 "listen_address": { 00:13:01.471 "trtype": "TCP", 00:13:01.471 "adrfam": "IPv4", 00:13:01.471 "traddr": "10.0.0.2", 00:13:01.471 "trsvcid": "4420" 00:13:01.471 }, 00:13:01.471 "peer_address": { 00:13:01.471 "trtype": "TCP", 00:13:01.471 "adrfam": "IPv4", 00:13:01.471 "traddr": "10.0.0.1", 00:13:01.471 "trsvcid": "51760" 00:13:01.471 }, 00:13:01.471 "auth": { 00:13:01.471 "state": "completed", 00:13:01.471 "digest": "sha512", 00:13:01.471 "dhgroup": "ffdhe4096" 00:13:01.471 } 00:13:01.471 } 00:13:01.471 ]' 00:13:01.471 05:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:01.471 05:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:01.471 05:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:01.471 05:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:01.471 05:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:01.471 05:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:01.471 05:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:01.471 05:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:01.728 05:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:00:MjQ4NDJmYzI5NGU2NWMzMzU1ZGQwODYwMTM4ZmE0OGFlMDk5YWViYWIxNjVjNDQ5rqs7Lw==: --dhchap-ctrl-secret DHHC-1:03:OGZjZjFhZDIwZmU0Zjk1MzNjNjU4YWI4NDk2ZGY3ZTRiNGQzMTVhNTM2YWY0OWJhNDhhMjk4ZjU1MzM0MjA5M0RZaH8=: 00:13:02.293 05:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:02.293 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:02.293 05:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:13:02.293 05:59:53 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.293 05:59:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.293 05:59:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.293 05:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:02.293 05:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:02.293 05:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:02.551 05:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:13:02.551 05:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:02.551 05:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:02.551 05:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:02.551 05:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:02.551 05:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:02.551 05:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:02.551 05:59:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.551 05:59:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.551 05:59:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.551 05:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:02.551 05:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:03.118 00:13:03.118 05:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:03.118 05:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:03.118 05:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:03.376 05:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:03.376 05:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:03.376 05:59:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.376 05:59:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.376 05:59:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.376 05:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:03.376 { 
00:13:03.376 "cntlid": 123, 00:13:03.376 "qid": 0, 00:13:03.376 "state": "enabled", 00:13:03.376 "thread": "nvmf_tgt_poll_group_000", 00:13:03.376 "listen_address": { 00:13:03.376 "trtype": "TCP", 00:13:03.377 "adrfam": "IPv4", 00:13:03.377 "traddr": "10.0.0.2", 00:13:03.377 "trsvcid": "4420" 00:13:03.377 }, 00:13:03.377 "peer_address": { 00:13:03.377 "trtype": "TCP", 00:13:03.377 "adrfam": "IPv4", 00:13:03.377 "traddr": "10.0.0.1", 00:13:03.377 "trsvcid": "51790" 00:13:03.377 }, 00:13:03.377 "auth": { 00:13:03.377 "state": "completed", 00:13:03.377 "digest": "sha512", 00:13:03.377 "dhgroup": "ffdhe4096" 00:13:03.377 } 00:13:03.377 } 00:13:03.377 ]' 00:13:03.377 05:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:03.377 05:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:03.377 05:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:03.377 05:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:03.377 05:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:03.377 05:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:03.377 05:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:03.377 05:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:03.635 05:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:01:MTkwODAxY2I3NjM4ZTBiNjMzZjIyNGZhOTNmYjQ4ZGQkfuOA: --dhchap-ctrl-secret DHHC-1:02:OGNiMDE4MTFmZWQwNTFiNGViOTYxYjNlZDZkMTJjOWVjYmNiNWNlZGU2YTRlOGFl9/17zw==: 00:13:04.201 05:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:04.201 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:04.201 05:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:13:04.201 05:59:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.201 05:59:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.459 05:59:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.459 05:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:04.459 05:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:04.459 05:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:04.459 05:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:13:04.459 05:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:04.459 05:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:04.459 05:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 
-- # dhgroup=ffdhe4096 00:13:04.459 05:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:04.459 05:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:04.459 05:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:04.459 05:59:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.459 05:59:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.459 05:59:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.459 05:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:04.459 05:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:05.025 00:13:05.025 05:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:05.025 05:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:05.025 05:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:05.284 05:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:05.284 05:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:05.284 05:59:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.284 05:59:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.284 05:59:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.284 05:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:05.284 { 00:13:05.284 "cntlid": 125, 00:13:05.284 "qid": 0, 00:13:05.284 "state": "enabled", 00:13:05.284 "thread": "nvmf_tgt_poll_group_000", 00:13:05.284 "listen_address": { 00:13:05.284 "trtype": "TCP", 00:13:05.284 "adrfam": "IPv4", 00:13:05.284 "traddr": "10.0.0.2", 00:13:05.284 "trsvcid": "4420" 00:13:05.284 }, 00:13:05.284 "peer_address": { 00:13:05.284 "trtype": "TCP", 00:13:05.284 "adrfam": "IPv4", 00:13:05.284 "traddr": "10.0.0.1", 00:13:05.284 "trsvcid": "57672" 00:13:05.284 }, 00:13:05.284 "auth": { 00:13:05.284 "state": "completed", 00:13:05.284 "digest": "sha512", 00:13:05.284 "dhgroup": "ffdhe4096" 00:13:05.284 } 00:13:05.284 } 00:13:05.284 ]' 00:13:05.284 05:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:05.284 05:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:05.284 05:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:05.284 05:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:05.284 05:59:56 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:05.284 05:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:05.284 05:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:05.284 05:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:05.543 05:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:02:ZjJjZjMyZjRkNDgyNDk4ODc0NzE4NjQ5NDljZDk5ZWFjY2EzMzM2MTU3ODk0MDExsoS2ug==: --dhchap-ctrl-secret DHHC-1:01:ZjU5ZTk2MzljZjcyZDJhZjNhYjYwZDQ3NzM1NjBmNWM8aNZx: 00:13:06.109 05:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:06.109 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:06.109 05:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:13:06.109 05:59:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.109 05:59:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.109 05:59:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.109 05:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:06.109 05:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:06.109 05:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:06.367 05:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:13:06.367 05:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:06.367 05:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:06.367 05:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:06.367 05:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:06.367 05:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:06.367 05:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key3 00:13:06.367 05:59:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.367 05:59:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.367 05:59:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.367 05:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:06.367 05:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:06.934 00:13:06.934 05:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:06.934 05:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:06.934 05:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:07.192 05:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:07.192 05:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:07.192 05:59:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.192 05:59:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.192 05:59:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.192 05:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:07.192 { 00:13:07.192 "cntlid": 127, 00:13:07.192 "qid": 0, 00:13:07.192 "state": "enabled", 00:13:07.192 "thread": "nvmf_tgt_poll_group_000", 00:13:07.192 "listen_address": { 00:13:07.192 "trtype": "TCP", 00:13:07.192 "adrfam": "IPv4", 00:13:07.192 "traddr": "10.0.0.2", 00:13:07.192 "trsvcid": "4420" 00:13:07.192 }, 00:13:07.192 "peer_address": { 00:13:07.192 "trtype": "TCP", 00:13:07.192 "adrfam": "IPv4", 00:13:07.192 "traddr": "10.0.0.1", 00:13:07.192 "trsvcid": "57700" 00:13:07.192 }, 00:13:07.192 "auth": { 00:13:07.192 "state": "completed", 00:13:07.192 "digest": "sha512", 00:13:07.192 "dhgroup": "ffdhe4096" 00:13:07.192 } 00:13:07.192 } 00:13:07.192 ]' 00:13:07.192 05:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:07.192 05:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:07.192 05:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:07.192 05:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:07.192 05:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:07.192 05:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:07.192 05:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:07.192 05:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:07.451 05:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:03:NjBiMTNlYjAxMDVmNzNhYWVhODU3MzFjYmI4NDI5NzA3Njc3OTJjMjNiMTQ2NGI3MzAxZDAwMjEzZGViZTllYqy7AwE=: 00:13:08.387 05:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:08.387 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:08.387 05:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:13:08.387 05:59:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.387 05:59:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.387 05:59:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.387 05:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:08.387 05:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:08.387 05:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:08.387 05:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:08.387 06:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:13:08.387 06:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:08.387 06:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:08.387 06:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:08.387 06:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:08.387 06:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:08.387 06:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:08.387 06:00:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.387 06:00:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.387 06:00:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.387 06:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:08.387 06:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:08.955 00:13:08.955 06:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:08.955 06:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:08.955 06:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:09.213 06:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:09.213 06:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:09.213 06:00:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.213 06:00:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
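The `[[ ... == ... ]]` checks in the trace come from probing the captured qpairs JSON with jq and comparing each auth field against the values under test. A minimal sketch of that verification step, reusing the same placeholder rpc.py path and subsystem NQN as the earlier sketch and the ffdhe6144 group in flight at this point in the log:

```bash
# Sketch of the qpair auth verification step. The jq paths match the probes in
# the trace; variable names and the RPC socket defaults are assumptions.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SUBNQN=nqn.2024-03.io.spdk:cnode0

qpairs=$("$RPC" nvmf_subsystem_get_qpairs "$SUBNQN")   # JSON array, one element per qpair

digest=$(jq  -r '.[0].auth.digest'  <<< "$qpairs")
dhgroup=$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")
state=$(jq   -r '.[0].auth.state'   <<< "$qpairs")

[[ "$digest"  == "sha512"    ]]   # negotiated hash matches the configured digest
[[ "$dhgroup" == "ffdhe6144" ]]   # negotiated FFDHE group matches the one under test
[[ "$state"   == "completed" ]]   # DH-HMAC-CHAP handshake finished successfully
```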
00:13:09.213 06:00:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.213 06:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:09.213 { 00:13:09.213 "cntlid": 129, 00:13:09.213 "qid": 0, 00:13:09.213 "state": "enabled", 00:13:09.213 "thread": "nvmf_tgt_poll_group_000", 00:13:09.213 "listen_address": { 00:13:09.213 "trtype": "TCP", 00:13:09.213 "adrfam": "IPv4", 00:13:09.213 "traddr": "10.0.0.2", 00:13:09.213 "trsvcid": "4420" 00:13:09.213 }, 00:13:09.213 "peer_address": { 00:13:09.213 "trtype": "TCP", 00:13:09.213 "adrfam": "IPv4", 00:13:09.213 "traddr": "10.0.0.1", 00:13:09.213 "trsvcid": "57722" 00:13:09.213 }, 00:13:09.213 "auth": { 00:13:09.213 "state": "completed", 00:13:09.213 "digest": "sha512", 00:13:09.213 "dhgroup": "ffdhe6144" 00:13:09.213 } 00:13:09.213 } 00:13:09.213 ]' 00:13:09.213 06:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:09.213 06:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:09.213 06:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:09.213 06:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:09.213 06:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:09.213 06:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:09.213 06:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:09.213 06:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:09.472 06:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:00:MjQ4NDJmYzI5NGU2NWMzMzU1ZGQwODYwMTM4ZmE0OGFlMDk5YWViYWIxNjVjNDQ5rqs7Lw==: --dhchap-ctrl-secret DHHC-1:03:OGZjZjFhZDIwZmU0Zjk1MzNjNjU4YWI4NDk2ZGY3ZTRiNGQzMTVhNTM2YWY0OWJhNDhhMjk4ZjU1MzM0MjA5M0RZaH8=: 00:13:10.404 06:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:10.404 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:10.404 06:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:13:10.404 06:00:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.404 06:00:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.404 06:00:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.404 06:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:10.404 06:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:10.404 06:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:10.661 06:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:13:10.661 06:00:02 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:10.661 06:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:10.661 06:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:10.661 06:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:10.661 06:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:10.661 06:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:10.661 06:00:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.661 06:00:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.661 06:00:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.661 06:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:10.661 06:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:10.919 00:13:11.176 06:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:11.176 06:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:11.176 06:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:11.434 06:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:11.434 06:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:11.434 06:00:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.434 06:00:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.434 06:00:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.434 06:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:11.434 { 00:13:11.434 "cntlid": 131, 00:13:11.434 "qid": 0, 00:13:11.434 "state": "enabled", 00:13:11.434 "thread": "nvmf_tgt_poll_group_000", 00:13:11.434 "listen_address": { 00:13:11.434 "trtype": "TCP", 00:13:11.434 "adrfam": "IPv4", 00:13:11.434 "traddr": "10.0.0.2", 00:13:11.434 "trsvcid": "4420" 00:13:11.434 }, 00:13:11.434 "peer_address": { 00:13:11.434 "trtype": "TCP", 00:13:11.434 "adrfam": "IPv4", 00:13:11.434 "traddr": "10.0.0.1", 00:13:11.434 "trsvcid": "57754" 00:13:11.434 }, 00:13:11.434 "auth": { 00:13:11.434 "state": "completed", 00:13:11.434 "digest": "sha512", 00:13:11.434 "dhgroup": "ffdhe6144" 00:13:11.434 } 00:13:11.434 } 00:13:11.434 ]' 00:13:11.434 06:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:11.434 06:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:11.434 06:00:02 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:11.434 06:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:11.434 06:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:11.434 06:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:11.434 06:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:11.434 06:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:11.693 06:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:01:MTkwODAxY2I3NjM4ZTBiNjMzZjIyNGZhOTNmYjQ4ZGQkfuOA: --dhchap-ctrl-secret DHHC-1:02:OGNiMDE4MTFmZWQwNTFiNGViOTYxYjNlZDZkMTJjOWVjYmNiNWNlZGU2YTRlOGFl9/17zw==: 00:13:12.628 06:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:12.628 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:12.628 06:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:13:12.628 06:00:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.628 06:00:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.628 06:00:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.628 06:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:12.628 06:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:12.628 06:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:12.887 06:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:13:12.887 06:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:12.887 06:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:12.887 06:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:12.887 06:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:12.887 06:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:12.887 06:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:12.887 06:00:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.887 06:00:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.887 06:00:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.887 06:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:12.887 06:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:13.146 00:13:13.404 06:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:13.404 06:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:13.404 06:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:13.662 06:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:13.662 06:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:13.662 06:00:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.662 06:00:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.662 06:00:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.662 06:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:13.662 { 00:13:13.662 "cntlid": 133, 00:13:13.662 "qid": 0, 00:13:13.662 "state": "enabled", 00:13:13.662 "thread": "nvmf_tgt_poll_group_000", 00:13:13.662 "listen_address": { 00:13:13.662 "trtype": "TCP", 00:13:13.662 "adrfam": "IPv4", 00:13:13.662 "traddr": "10.0.0.2", 00:13:13.662 "trsvcid": "4420" 00:13:13.662 }, 00:13:13.662 "peer_address": { 00:13:13.662 "trtype": "TCP", 00:13:13.662 "adrfam": "IPv4", 00:13:13.662 "traddr": "10.0.0.1", 00:13:13.662 "trsvcid": "57778" 00:13:13.662 }, 00:13:13.662 "auth": { 00:13:13.662 "state": "completed", 00:13:13.662 "digest": "sha512", 00:13:13.662 "dhgroup": "ffdhe6144" 00:13:13.662 } 00:13:13.662 } 00:13:13.662 ]' 00:13:13.662 06:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:13.662 06:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:13.662 06:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:13.662 06:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:13.662 06:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:13.662 06:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:13.662 06:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:13.662 06:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:13.920 06:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:02:ZjJjZjMyZjRkNDgyNDk4ODc0NzE4NjQ5NDljZDk5ZWFjY2EzMzM2MTU3ODk0MDExsoS2ug==: --dhchap-ctrl-secret 
DHHC-1:01:ZjU5ZTk2MzljZjcyZDJhZjNhYjYwZDQ3NzM1NjBmNWM8aNZx: 00:13:14.904 06:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:14.904 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:14.904 06:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:13:14.904 06:00:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.904 06:00:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.904 06:00:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.904 06:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:14.904 06:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:14.904 06:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:14.904 06:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:13:14.904 06:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:14.904 06:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:14.904 06:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:14.904 06:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:14.904 06:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:14.905 06:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key3 00:13:14.905 06:00:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.905 06:00:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.905 06:00:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.905 06:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:14.905 06:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:15.470 00:13:15.470 06:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:15.470 06:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:15.470 06:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:15.727 06:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:15.727 06:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:13:15.727 06:00:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.727 06:00:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.727 06:00:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.727 06:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:15.727 { 00:13:15.727 "cntlid": 135, 00:13:15.727 "qid": 0, 00:13:15.727 "state": "enabled", 00:13:15.727 "thread": "nvmf_tgt_poll_group_000", 00:13:15.727 "listen_address": { 00:13:15.727 "trtype": "TCP", 00:13:15.727 "adrfam": "IPv4", 00:13:15.727 "traddr": "10.0.0.2", 00:13:15.727 "trsvcid": "4420" 00:13:15.727 }, 00:13:15.727 "peer_address": { 00:13:15.727 "trtype": "TCP", 00:13:15.727 "adrfam": "IPv4", 00:13:15.727 "traddr": "10.0.0.1", 00:13:15.727 "trsvcid": "39920" 00:13:15.727 }, 00:13:15.727 "auth": { 00:13:15.727 "state": "completed", 00:13:15.727 "digest": "sha512", 00:13:15.727 "dhgroup": "ffdhe6144" 00:13:15.727 } 00:13:15.727 } 00:13:15.727 ]' 00:13:15.727 06:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:15.727 06:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:15.727 06:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:15.727 06:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:15.727 06:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:15.727 06:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:15.727 06:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:15.727 06:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:15.985 06:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:03:NjBiMTNlYjAxMDVmNzNhYWVhODU3MzFjYmI4NDI5NzA3Njc3OTJjMjNiMTQ2NGI3MzAxZDAwMjEzZGViZTllYqy7AwE=: 00:13:16.550 06:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:16.550 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:16.550 06:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:13:16.550 06:00:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.550 06:00:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.550 06:00:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.550 06:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:16.550 06:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:16.550 06:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:16.551 06:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:16.808 06:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:13:16.808 06:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:16.809 06:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:16.809 06:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:16.809 06:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:16.809 06:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:16.809 06:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:16.809 06:00:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.809 06:00:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.809 06:00:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.809 06:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:16.809 06:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:17.375 00:13:17.375 06:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:17.375 06:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:17.375 06:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:17.632 06:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:17.632 06:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:17.632 06:00:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.632 06:00:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.632 06:00:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.632 06:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:17.632 { 00:13:17.632 "cntlid": 137, 00:13:17.632 "qid": 0, 00:13:17.632 "state": "enabled", 00:13:17.632 "thread": "nvmf_tgt_poll_group_000", 00:13:17.632 "listen_address": { 00:13:17.632 "trtype": "TCP", 00:13:17.632 "adrfam": "IPv4", 00:13:17.632 "traddr": "10.0.0.2", 00:13:17.632 "trsvcid": "4420" 00:13:17.632 }, 00:13:17.632 "peer_address": { 00:13:17.632 "trtype": "TCP", 00:13:17.632 "adrfam": "IPv4", 00:13:17.632 "traddr": "10.0.0.1", 00:13:17.632 "trsvcid": "39946" 00:13:17.632 }, 00:13:17.632 "auth": { 00:13:17.632 "state": "completed", 00:13:17.632 "digest": "sha512", 00:13:17.632 "dhgroup": "ffdhe8192" 00:13:17.632 } 00:13:17.632 } 
00:13:17.632 ]' 00:13:17.890 06:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:17.890 06:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:17.890 06:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:17.890 06:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:17.890 06:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:17.890 06:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:17.890 06:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:17.890 06:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:18.148 06:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:00:MjQ4NDJmYzI5NGU2NWMzMzU1ZGQwODYwMTM4ZmE0OGFlMDk5YWViYWIxNjVjNDQ5rqs7Lw==: --dhchap-ctrl-secret DHHC-1:03:OGZjZjFhZDIwZmU0Zjk1MzNjNjU4YWI4NDk2ZGY3ZTRiNGQzMTVhNTM2YWY0OWJhNDhhMjk4ZjU1MzM0MjA5M0RZaH8=: 00:13:18.714 06:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:18.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:18.714 06:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:13:18.714 06:00:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.714 06:00:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.714 06:00:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.714 06:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:18.714 06:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:18.714 06:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:18.973 06:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:13:18.973 06:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:18.973 06:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:18.973 06:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:18.973 06:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:18.973 06:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:18.973 06:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:18.973 06:00:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.973 06:00:10 
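Each digest/DH-group/key combination in this trace runs the same connect_authenticate round. Reduced to its essentials, with the hostrpc helper spelled out and with the key names (key0/ckey0, set up earlier in the script and not shown in this section) used as placeholders, one round looks roughly like the sketch below; the kernel-initiator connect that follows each round is sketched further down. This is illustrative, not the verbatim script.

# Sketch of one connect_authenticate round (placeholder values, not the exact script).
hostrpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce

# Limit the SPDK host stack to the digest/DH group under test.
hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
# Register the host on the target subsystem with a DH-HMAC-CHAP key pair
# (the test wraps this in rpc_cmd, which talks to the nvmf target's RPC socket).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
# Attach a controller from the SPDK host side, authenticating with the same pair.
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0
hostrpc bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
# ...verify the qpair's auth state (next sketch), then tear down before the next round.
hostrpc bdev_nvme_detach_controller nvme0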
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.973 06:00:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.973 06:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:18.973 06:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:19.539 00:13:19.539 06:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:19.539 06:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:19.539 06:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:19.797 06:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:19.797 06:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:19.797 06:00:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.797 06:00:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.797 06:00:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.797 06:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:19.797 { 00:13:19.797 "cntlid": 139, 00:13:19.797 "qid": 0, 00:13:19.797 "state": "enabled", 00:13:19.797 "thread": "nvmf_tgt_poll_group_000", 00:13:19.797 "listen_address": { 00:13:19.797 "trtype": "TCP", 00:13:19.797 "adrfam": "IPv4", 00:13:19.797 "traddr": "10.0.0.2", 00:13:19.797 "trsvcid": "4420" 00:13:19.797 }, 00:13:19.797 "peer_address": { 00:13:19.797 "trtype": "TCP", 00:13:19.797 "adrfam": "IPv4", 00:13:19.797 "traddr": "10.0.0.1", 00:13:19.797 "trsvcid": "39974" 00:13:19.797 }, 00:13:19.797 "auth": { 00:13:19.797 "state": "completed", 00:13:19.797 "digest": "sha512", 00:13:19.797 "dhgroup": "ffdhe8192" 00:13:19.797 } 00:13:19.797 } 00:13:19.797 ]' 00:13:19.797 06:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:19.797 06:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:19.797 06:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:19.797 06:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:19.797 06:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:20.055 06:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:20.055 06:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:20.055 06:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:20.312 06:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:01:MTkwODAxY2I3NjM4ZTBiNjMzZjIyNGZhOTNmYjQ4ZGQkfuOA: --dhchap-ctrl-secret DHHC-1:02:OGNiMDE4MTFmZWQwNTFiNGViOTYxYjNlZDZkMTJjOWVjYmNiNWNlZGU2YTRlOGFl9/17zw==: 00:13:20.878 06:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:20.878 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:20.878 06:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:13:20.878 06:00:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.878 06:00:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.878 06:00:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.878 06:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:20.878 06:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:20.878 06:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:21.136 06:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:13:21.136 06:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:21.136 06:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:21.136 06:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:21.136 06:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:21.136 06:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:21.137 06:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:21.137 06:00:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.137 06:00:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.137 06:00:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.137 06:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:21.137 06:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:21.702 00:13:21.702 06:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:21.702 06:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:21.702 06:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:21.961 06:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:21.961 06:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:21.961 06:00:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.961 06:00:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.961 06:00:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.961 06:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:21.961 { 00:13:21.961 "cntlid": 141, 00:13:21.961 "qid": 0, 00:13:21.961 "state": "enabled", 00:13:21.961 "thread": "nvmf_tgt_poll_group_000", 00:13:21.961 "listen_address": { 00:13:21.961 "trtype": "TCP", 00:13:21.961 "adrfam": "IPv4", 00:13:21.961 "traddr": "10.0.0.2", 00:13:21.961 "trsvcid": "4420" 00:13:21.961 }, 00:13:21.961 "peer_address": { 00:13:21.961 "trtype": "TCP", 00:13:21.961 "adrfam": "IPv4", 00:13:21.961 "traddr": "10.0.0.1", 00:13:21.961 "trsvcid": "39988" 00:13:21.961 }, 00:13:21.961 "auth": { 00:13:21.961 "state": "completed", 00:13:21.961 "digest": "sha512", 00:13:21.961 "dhgroup": "ffdhe8192" 00:13:21.961 } 00:13:21.961 } 00:13:21.961 ]' 00:13:21.961 06:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:21.961 06:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:21.961 06:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:22.220 06:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:22.220 06:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:22.220 06:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:22.220 06:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:22.220 06:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:22.480 06:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:02:ZjJjZjMyZjRkNDgyNDk4ODc0NzE4NjQ5NDljZDk5ZWFjY2EzMzM2MTU3ODk0MDExsoS2ug==: --dhchap-ctrl-secret DHHC-1:01:ZjU5ZTk2MzljZjcyZDJhZjNhYjYwZDQ3NzM1NjBmNWM8aNZx: 00:13:23.060 06:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:23.060 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:23.060 06:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:13:23.060 06:00:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.060 06:00:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.060 06:00:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.060 06:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:23.060 06:00:14 
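After each attach, the script confirms that authentication actually completed with the parameters under test: it pulls the subsystem's queue pairs from the target and inspects the per-qpair auth object, exactly as the jq checks above do. In outline (the expected values are whatever that round configured):

# Verify the negotiated auth parameters on the target side.
qpairs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]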
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:23.060 06:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:23.318 06:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:13:23.318 06:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:23.318 06:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:23.318 06:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:23.318 06:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:23.318 06:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:23.318 06:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key3 00:13:23.318 06:00:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.318 06:00:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.318 06:00:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.318 06:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:23.318 06:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:23.883 00:13:23.883 06:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:23.883 06:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:23.883 06:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:24.141 06:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:24.141 06:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:24.141 06:00:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.141 06:00:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.141 06:00:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.141 06:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:24.141 { 00:13:24.141 "cntlid": 143, 00:13:24.141 "qid": 0, 00:13:24.141 "state": "enabled", 00:13:24.141 "thread": "nvmf_tgt_poll_group_000", 00:13:24.141 "listen_address": { 00:13:24.141 "trtype": "TCP", 00:13:24.141 "adrfam": "IPv4", 00:13:24.141 "traddr": "10.0.0.2", 00:13:24.141 "trsvcid": "4420" 00:13:24.141 }, 00:13:24.141 "peer_address": { 00:13:24.141 "trtype": "TCP", 00:13:24.141 "adrfam": "IPv4", 00:13:24.141 "traddr": "10.0.0.1", 00:13:24.141 "trsvcid": "40020" 
00:13:24.141 }, 00:13:24.141 "auth": { 00:13:24.141 "state": "completed", 00:13:24.141 "digest": "sha512", 00:13:24.141 "dhgroup": "ffdhe8192" 00:13:24.141 } 00:13:24.141 } 00:13:24.141 ]' 00:13:24.141 06:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:24.141 06:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:24.141 06:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:24.398 06:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:24.398 06:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:24.398 06:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:24.398 06:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:24.398 06:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:24.655 06:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:03:NjBiMTNlYjAxMDVmNzNhYWVhODU3MzFjYmI4NDI5NzA3Njc3OTJjMjNiMTQ2NGI3MzAxZDAwMjEzZGViZTllYqy7AwE=: 00:13:25.221 06:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:25.221 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:25.221 06:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:13:25.222 06:00:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.222 06:00:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.222 06:00:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.222 06:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:13:25.222 06:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:13:25.222 06:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:13:25.222 06:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:25.222 06:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:25.222 06:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:25.485 06:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:13:25.486 06:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:25.486 06:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:25.486 06:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:25.486 06:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
key=key0 00:13:25.486 06:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:25.486 06:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:25.486 06:00:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.486 06:00:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.486 06:00:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.486 06:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:25.486 06:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:26.057 00:13:26.057 06:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:26.057 06:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:26.057 06:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:26.315 06:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:26.315 06:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:26.315 06:00:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.315 06:00:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.315 06:00:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.315 06:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:26.315 { 00:13:26.315 "cntlid": 145, 00:13:26.315 "qid": 0, 00:13:26.315 "state": "enabled", 00:13:26.315 "thread": "nvmf_tgt_poll_group_000", 00:13:26.315 "listen_address": { 00:13:26.315 "trtype": "TCP", 00:13:26.315 "adrfam": "IPv4", 00:13:26.315 "traddr": "10.0.0.2", 00:13:26.315 "trsvcid": "4420" 00:13:26.315 }, 00:13:26.315 "peer_address": { 00:13:26.315 "trtype": "TCP", 00:13:26.315 "adrfam": "IPv4", 00:13:26.315 "traddr": "10.0.0.1", 00:13:26.315 "trsvcid": "45928" 00:13:26.315 }, 00:13:26.315 "auth": { 00:13:26.315 "state": "completed", 00:13:26.315 "digest": "sha512", 00:13:26.315 "dhgroup": "ffdhe8192" 00:13:26.315 } 00:13:26.315 } 00:13:26.315 ]' 00:13:26.315 06:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:26.315 06:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:26.315 06:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:26.573 06:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:26.573 06:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:26.573 06:00:18 nvmf_tcp.nvmf_auth_target -- 
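Each round also exercises the Linux kernel initiator: nvme-cli connects with the host and controller secrets passed on the command line (the two-digit field after DHHC-1: identifies the transform applied to the key material, 00 meaning an untransformed secret), then disconnects. With the secrets abbreviated to placeholders:

# Kernel-initiator side of a round (secrets shown as placeholders, not real key material).
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce \
    --hostid d95af516-4532-4483-a837-b3cd72acabce \
    --dhchap-secret 'DHHC-1:00:<base64 host secret>' \
    --dhchap-ctrl-secret 'DHHC-1:03:<base64 controller secret>'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0   # prints "disconnected 1 controller(s)" on success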
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:26.573 06:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:26.573 06:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:26.832 06:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:00:MjQ4NDJmYzI5NGU2NWMzMzU1ZGQwODYwMTM4ZmE0OGFlMDk5YWViYWIxNjVjNDQ5rqs7Lw==: --dhchap-ctrl-secret DHHC-1:03:OGZjZjFhZDIwZmU0Zjk1MzNjNjU4YWI4NDk2ZGY3ZTRiNGQzMTVhNTM2YWY0OWJhNDhhMjk4ZjU1MzM0MjA5M0RZaH8=: 00:13:27.400 06:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:27.400 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:27.400 06:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:13:27.400 06:00:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.400 06:00:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.400 06:00:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.400 06:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key1 00:13:27.400 06:00:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.400 06:00:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.400 06:00:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.400 06:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:13:27.400 06:00:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:13:27.400 06:00:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:13:27.400 06:00:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:13:27.400 06:00:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:27.400 06:00:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:13:27.400 06:00:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:27.400 06:00:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:13:27.400 06:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:13:27.965 request: 00:13:27.965 { 00:13:27.965 "name": "nvme0", 00:13:27.965 "trtype": "tcp", 00:13:27.965 "traddr": "10.0.0.2", 00:13:27.965 "adrfam": "ipv4", 00:13:27.965 "trsvcid": "4420", 00:13:27.965 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:27.965 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce", 00:13:27.965 "prchk_reftag": false, 00:13:27.965 "prchk_guard": false, 00:13:27.965 "hdgst": false, 00:13:27.965 "ddgst": false, 00:13:27.965 "dhchap_key": "key2", 00:13:27.965 "method": "bdev_nvme_attach_controller", 00:13:27.965 "req_id": 1 00:13:27.965 } 00:13:27.965 Got JSON-RPC error response 00:13:27.965 response: 00:13:27.965 { 00:13:27.965 "code": -5, 00:13:27.965 "message": "Input/output error" 00:13:27.965 } 00:13:27.965 06:00:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:13:27.965 06:00:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:27.965 06:00:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:27.965 06:00:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:27.965 06:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:13:27.965 06:00:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.965 06:00:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.965 06:00:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.965 06:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:27.965 06:00:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.965 06:00:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.965 06:00:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.965 06:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:27.965 06:00:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:13:27.965 06:00:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:27.965 06:00:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:13:27.965 06:00:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:27.965 06:00:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:13:27.965 06:00:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type 
-t "$arg")" in 00:13:27.965 06:00:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:27.965 06:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:28.529 request: 00:13:28.529 { 00:13:28.529 "name": "nvme0", 00:13:28.529 "trtype": "tcp", 00:13:28.529 "traddr": "10.0.0.2", 00:13:28.529 "adrfam": "ipv4", 00:13:28.529 "trsvcid": "4420", 00:13:28.529 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:28.529 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce", 00:13:28.529 "prchk_reftag": false, 00:13:28.529 "prchk_guard": false, 00:13:28.529 "hdgst": false, 00:13:28.529 "ddgst": false, 00:13:28.529 "dhchap_key": "key1", 00:13:28.529 "dhchap_ctrlr_key": "ckey2", 00:13:28.529 "method": "bdev_nvme_attach_controller", 00:13:28.529 "req_id": 1 00:13:28.529 } 00:13:28.529 Got JSON-RPC error response 00:13:28.529 response: 00:13:28.529 { 00:13:28.529 "code": -5, 00:13:28.529 "message": "Input/output error" 00:13:28.529 } 00:13:28.529 06:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:13:28.529 06:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:28.529 06:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:28.529 06:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:28.529 06:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:13:28.529 06:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.529 06:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.529 06:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.529 06:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key1 00:13:28.529 06:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.529 06:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.529 06:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.529 06:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:28.529 06:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:13:28.529 06:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:28.529 06:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:13:28.529 06:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:28.529 06:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:13:28.529 06:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:28.529 06:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:28.529 06:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:29.097 request: 00:13:29.097 { 00:13:29.097 "name": "nvme0", 00:13:29.097 "trtype": "tcp", 00:13:29.097 "traddr": "10.0.0.2", 00:13:29.097 "adrfam": "ipv4", 00:13:29.097 "trsvcid": "4420", 00:13:29.097 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:29.097 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce", 00:13:29.097 "prchk_reftag": false, 00:13:29.097 "prchk_guard": false, 00:13:29.097 "hdgst": false, 00:13:29.097 "ddgst": false, 00:13:29.097 "dhchap_key": "key1", 00:13:29.097 "dhchap_ctrlr_key": "ckey1", 00:13:29.097 "method": "bdev_nvme_attach_controller", 00:13:29.097 "req_id": 1 00:13:29.097 } 00:13:29.097 Got JSON-RPC error response 00:13:29.097 response: 00:13:29.097 { 00:13:29.097 "code": -5, 00:13:29.097 "message": "Input/output error" 00:13:29.097 } 00:13:29.097 06:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:13:29.097 06:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:29.097 06:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:29.097 06:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:29.097 06:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:13:29.097 06:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.097 06:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.097 06:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.097 06:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 80820 00:13:29.097 06:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 80820 ']' 00:13:29.097 06:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 80820 00:13:29.097 06:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:13:29.097 06:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:29.097 06:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80820 00:13:29.097 killing process with pid 80820 00:13:29.097 06:00:20 
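The @117/@118, @124/@125 and @131/@132 pairs are negative tests: the host is registered with one key and the attach is attempted with a different key or a mismatched controller key, and the surrounding NOT helper passes only if bdev_nvme_attach_controller fails. Stripped of that helper, the shape is roughly the following sketch; the failure surfaces as the JSON-RPC responses above, code -5, "Input/output error".

# Negative case: host registered with key1 only, attach attempted with key2.
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key1
if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
       -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key2; then
    echo "attach should have been rejected" >&2; exit 1
fi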
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:29.097 06:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:29.097 06:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80820' 00:13:29.097 06:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 80820 00:13:29.097 06:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 80820 00:13:29.097 06:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:13:29.097 06:00:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:29.097 06:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:29.097 06:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.097 06:00:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=83775 00:13:29.097 06:00:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 83775 00:13:29.097 06:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 83775 ']' 00:13:29.097 06:00:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:13:29.097 06:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:29.097 06:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:29.097 06:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:29.097 06:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:29.097 06:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.471 06:00:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:30.471 06:00:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:13:30.471 06:00:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:30.471 06:00:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:30.471 06:00:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.471 06:00:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:30.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:30.471 06:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:30.471 06:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 83775 00:13:30.471 06:00:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 83775 ']' 00:13:30.471 06:00:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:30.471 06:00:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:30.471 06:00:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
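From @139 the test restarts the nvmf target itself, this time with RPC initialization deferred and the nvmf_auth trace flag enabled, so the target logs the authentication exchanges that follow. The launch, reduced from the trace (the network namespace, core mask and the waitforlisten helper come from the test environment):

# Relaunch the target with auth logging; wait for its RPC socket before configuring it.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
nvmfpid=$!
waitforlisten "$nvmfpid"   # test-suite helper; blocks until /var/tmp/spdk.sock is listening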
00:13:30.471 06:00:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:30.471 06:00:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.471 06:00:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:30.471 06:00:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:13:30.471 06:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:13:30.471 06:00:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.471 06:00:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.471 06:00:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.471 06:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:13:30.471 06:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:30.471 06:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:30.471 06:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:30.471 06:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:30.471 06:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:30.471 06:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key3 00:13:30.471 06:00:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.471 06:00:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.471 06:00:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.471 06:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:30.471 06:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:31.045 00:13:31.045 06:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:31.045 06:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:31.045 06:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:31.303 06:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:31.303 06:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:31.303 06:00:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.303 06:00:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.303 06:00:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.303 06:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:31.303 { 00:13:31.303 "cntlid": 1, 00:13:31.303 "qid": 0, 
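For key3 the ${ckeys[$3]:+...} expansion is empty, so no --dhchap-ctrlr-key is passed on either side: this round checks one-way authentication, where the host proves itself to the controller but does not demand proof back. The two calls reduce to:

# key3 round: host key only, no controller key, i.e. unidirectional DH-HMAC-CHAP.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
    nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce \
    --dhchap-key key3
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
    -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3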
00:13:31.303 "state": "enabled", 00:13:31.303 "thread": "nvmf_tgt_poll_group_000", 00:13:31.303 "listen_address": { 00:13:31.303 "trtype": "TCP", 00:13:31.303 "adrfam": "IPv4", 00:13:31.303 "traddr": "10.0.0.2", 00:13:31.303 "trsvcid": "4420" 00:13:31.303 }, 00:13:31.303 "peer_address": { 00:13:31.303 "trtype": "TCP", 00:13:31.303 "adrfam": "IPv4", 00:13:31.303 "traddr": "10.0.0.1", 00:13:31.303 "trsvcid": "45998" 00:13:31.303 }, 00:13:31.303 "auth": { 00:13:31.303 "state": "completed", 00:13:31.303 "digest": "sha512", 00:13:31.303 "dhgroup": "ffdhe8192" 00:13:31.303 } 00:13:31.303 } 00:13:31.303 ]' 00:13:31.303 06:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:31.303 06:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:31.303 06:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:31.562 06:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:31.562 06:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:31.562 06:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:31.562 06:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:31.562 06:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:31.820 06:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid d95af516-4532-4483-a837-b3cd72acabce --dhchap-secret DHHC-1:03:NjBiMTNlYjAxMDVmNzNhYWVhODU3MzFjYmI4NDI5NzA3Njc3OTJjMjNiMTQ2NGI3MzAxZDAwMjEzZGViZTllYqy7AwE=: 00:13:32.385 06:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:32.385 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:32.385 06:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:13:32.386 06:00:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.386 06:00:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.386 06:00:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.386 06:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --dhchap-key key3 00:13:32.386 06:00:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.386 06:00:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.386 06:00:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.386 06:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:13:32.386 06:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:13:32.644 06:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:32.644 06:00:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:13:32.644 06:00:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:32.644 06:00:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:13:32.644 06:00:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:32.644 06:00:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:13:32.644 06:00:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:32.644 06:00:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:32.644 06:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:32.903 request: 00:13:32.903 { 00:13:32.903 "name": "nvme0", 00:13:32.903 "trtype": "tcp", 00:13:32.903 "traddr": "10.0.0.2", 00:13:32.903 "adrfam": "ipv4", 00:13:32.903 "trsvcid": "4420", 00:13:32.903 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:32.903 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce", 00:13:32.903 "prchk_reftag": false, 00:13:32.903 "prchk_guard": false, 00:13:32.903 "hdgst": false, 00:13:32.904 "ddgst": false, 00:13:32.904 "dhchap_key": "key3", 00:13:32.904 "method": "bdev_nvme_attach_controller", 00:13:32.904 "req_id": 1 00:13:32.904 } 00:13:32.904 Got JSON-RPC error response 00:13:32.904 response: 00:13:32.904 { 00:13:32.904 "code": -5, 00:13:32.904 "message": "Input/output error" 00:13:32.904 } 00:13:32.904 06:00:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:13:32.904 06:00:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:32.904 06:00:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:32.904 06:00:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:32.904 06:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:13:32.904 06:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:13:32.904 06:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:32.904 06:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:33.163 06:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:33.163 06:00:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:13:33.163 06:00:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:33.163 06:00:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:13:33.163 06:00:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:33.163 06:00:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:13:33.163 06:00:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:33.163 06:00:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:33.163 06:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:33.420 request: 00:13:33.420 { 00:13:33.420 "name": "nvme0", 00:13:33.420 "trtype": "tcp", 00:13:33.420 "traddr": "10.0.0.2", 00:13:33.420 "adrfam": "ipv4", 00:13:33.420 "trsvcid": "4420", 00:13:33.420 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:33.420 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce", 00:13:33.420 "prchk_reftag": false, 00:13:33.420 "prchk_guard": false, 00:13:33.420 "hdgst": false, 00:13:33.420 "ddgst": false, 00:13:33.420 "dhchap_key": "key3", 00:13:33.420 "method": "bdev_nvme_attach_controller", 00:13:33.420 "req_id": 1 00:13:33.420 } 00:13:33.420 Got JSON-RPC error response 00:13:33.420 response: 00:13:33.420 { 00:13:33.420 "code": -5, 00:13:33.420 "message": "Input/output error" 00:13:33.420 } 00:13:33.420 06:00:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:13:33.420 06:00:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:33.420 06:00:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:33.420 06:00:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:33.420 06:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:13:33.420 06:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:13:33.420 06:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:13:33.420 06:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:33.420 06:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:33.420 06:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 
--dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:33.678 06:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:13:33.678 06:00:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.678 06:00:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.678 06:00:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.678 06:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:13:33.678 06:00:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.678 06:00:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.678 06:00:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.678 06:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:33.678 06:00:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:13:33.678 06:00:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:33.678 06:00:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:13:33.678 06:00:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:33.678 06:00:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:13:33.678 06:00:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:33.678 06:00:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:33.678 06:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:33.937 request: 00:13:33.937 { 00:13:33.937 "name": "nvme0", 00:13:33.937 "trtype": "tcp", 00:13:33.937 "traddr": "10.0.0.2", 00:13:33.937 "adrfam": "ipv4", 00:13:33.937 "trsvcid": "4420", 00:13:33.937 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:33.937 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce", 00:13:33.937 "prchk_reftag": false, 00:13:33.937 "prchk_guard": false, 00:13:33.937 "hdgst": false, 00:13:33.937 "ddgst": false, 00:13:33.937 "dhchap_key": "key0", 00:13:33.937 "dhchap_ctrlr_key": "key1", 00:13:33.937 "method": "bdev_nvme_attach_controller", 00:13:33.937 "req_id": 1 00:13:33.937 } 00:13:33.937 Got 
JSON-RPC error response 00:13:33.937 response: 00:13:33.937 { 00:13:33.937 "code": -5, 00:13:33.937 "message": "Input/output error" 00:13:33.937 } 00:13:33.937 06:00:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:13:33.937 06:00:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:33.937 06:00:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:33.937 06:00:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:33.937 06:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:13:33.937 06:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:13:34.197 00:13:34.197 06:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:13:34.197 06:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:13:34.197 06:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:34.455 06:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:34.455 06:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:34.455 06:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:34.714 06:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:13:34.714 06:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:13:34.714 06:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 80852 00:13:34.714 06:00:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 80852 ']' 00:13:34.714 06:00:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 80852 00:13:34.714 06:00:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:13:34.714 06:00:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:34.714 06:00:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80852 00:13:34.714 killing process with pid 80852 00:13:34.714 06:00:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:34.714 06:00:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:34.714 06:00:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80852' 00:13:34.714 06:00:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 80852 00:13:34.714 06:00:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 80852 00:13:34.974 06:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:13:34.974 06:00:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:34.974 06:00:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 
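The failed and successful attach attempts above reduce to a short host-side RPC sequence; a condensed sketch reconstructed from the log (same socket path, target NQN and key names; the NOT/expected-failure wrappers are omitted, and HOSTNQN stands in for the nqn.2014-08.org.nvmexpress:uuid host NQN used throughout):

  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256
  #   -> attaching with key3 now fails with -5 (Input/output error), as the test expects
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256,sha384,sha512 \
      --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0
  rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0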
00:13:34.974 06:00:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:34.974 06:00:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:13:34.974 06:00:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:34.974 06:00:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:34.974 rmmod nvme_tcp 00:13:34.974 rmmod nvme_fabrics 00:13:34.974 rmmod nvme_keyring 00:13:34.974 06:00:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:34.974 06:00:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:13:34.974 06:00:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:13:34.974 06:00:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 83775 ']' 00:13:34.974 06:00:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 83775 00:13:34.974 06:00:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 83775 ']' 00:13:34.974 06:00:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 83775 00:13:34.974 06:00:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:13:34.974 06:00:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:34.974 06:00:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83775 00:13:34.974 killing process with pid 83775 00:13:34.974 06:00:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:34.974 06:00:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:34.974 06:00:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83775' 00:13:34.974 06:00:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 83775 00:13:34.974 06:00:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 83775 00:13:35.236 06:00:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:35.236 06:00:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:35.236 06:00:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:35.236 06:00:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:35.236 06:00:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:35.236 06:00:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:35.236 06:00:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:35.236 06:00:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:35.236 06:00:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:35.236 06:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.KFI /tmp/spdk.key-sha256.y2Q /tmp/spdk.key-sha384.fL1 /tmp/spdk.key-sha512.qW6 /tmp/spdk.key-sha512.SY8 /tmp/spdk.key-sha384.ww2 /tmp/spdk.key-sha256.KT8 '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:13:35.236 00:13:35.236 real 2m39.022s 00:13:35.236 user 6m20.463s 00:13:35.236 sys 0m24.353s 00:13:35.236 06:00:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:35.236 
************************************ 00:13:35.236 END TEST nvmf_auth_target 00:13:35.236 ************************************ 00:13:35.236 06:00:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.236 06:00:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:35.236 06:00:26 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:13:35.236 06:00:26 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:35.236 06:00:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:13:35.236 06:00:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:35.236 06:00:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:35.236 ************************************ 00:13:35.236 START TEST nvmf_bdevio_no_huge 00:13:35.236 ************************************ 00:13:35.236 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:35.236 * Looking for test storage... 00:13:35.236 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:35.236 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:35.236 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:13:35.236 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:35.236 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:35.236 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:35.236 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:35.236 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:35.236 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:35.236 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:35.236 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:35.236 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:35.236 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:35.236 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:13:35.236 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=d95af516-4532-4483-a837-b3cd72acabce 00:13:35.236 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:35.236 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:35.236 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:35.236 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:35.236 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:35.236 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:35.236 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh 
]] 00:13:35.236 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:35.236 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.236 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.236 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.236 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:13:35.236 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.236 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:13:35.236 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:35.236 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:35.236 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:35.236 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:35.236 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:35.236 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:35.236 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:35.237 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:35.237 06:00:26 
nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:35.237 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:35.237 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:13:35.237 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:35.237 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:35.237 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:35.237 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:35.237 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:35.237 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:35.237 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:35.237 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:35.237 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:35.237 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:35.237 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:35.237 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:35.237 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:35.237 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:35.237 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:35.237 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:35.237 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:35.237 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:35.237 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:35.237 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:35.237 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:35.237 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:35.237 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:35.237 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:35.237 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:35.237 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:35.237 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:35.497 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:35.497 Cannot find device "nvmf_tgt_br" 00:13:35.497 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:13:35.497 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:35.497 Cannot find device "nvmf_tgt_br2" 
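The "Cannot find device" messages above are the expected cleanup of stale interfaces; nvmf_veth_init then rebuilds the virtual test network with the ip/iptables commands that follow. Kept to the same interface names and addresses, and with the second target link and error suppression left out, the topology amounts to roughly this sketch:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2    # initiator-side reachability check against the target address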
00:13:35.497 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:13:35.497 06:00:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:35.497 06:00:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:35.497 Cannot find device "nvmf_tgt_br" 00:13:35.497 06:00:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:13:35.497 06:00:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:35.497 Cannot find device "nvmf_tgt_br2" 00:13:35.497 06:00:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:13:35.497 06:00:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:35.497 06:00:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:35.497 06:00:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:35.497 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:35.497 06:00:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:13:35.497 06:00:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:35.497 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:35.497 06:00:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:13:35.497 06:00:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:35.497 06:00:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:35.497 06:00:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:35.497 06:00:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:35.497 06:00:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:35.497 06:00:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:35.497 06:00:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:35.497 06:00:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:35.497 06:00:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:35.497 06:00:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:35.497 06:00:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:35.497 06:00:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:35.497 06:00:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:35.497 06:00:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:35.497 06:00:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:35.497 06:00:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:35.756 06:00:27 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:35.756 06:00:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:35.756 06:00:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:35.756 06:00:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:35.756 06:00:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:35.756 06:00:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:35.756 06:00:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:35.756 06:00:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:35.756 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:35.756 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:13:35.756 00:13:35.756 --- 10.0.0.2 ping statistics --- 00:13:35.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:35.756 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:13:35.756 06:00:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:35.756 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:35.756 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:13:35.756 00:13:35.756 --- 10.0.0.3 ping statistics --- 00:13:35.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:35.756 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:13:35.756 06:00:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:35.756 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:35.756 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:13:35.756 00:13:35.756 --- 10.0.0.1 ping statistics --- 00:13:35.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:35.756 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:13:35.756 06:00:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:35.756 06:00:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:13:35.756 06:00:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:35.756 06:00:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:35.756 06:00:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:35.756 06:00:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:35.756 06:00:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:35.756 06:00:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:35.756 06:00:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:35.756 06:00:27 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:35.756 06:00:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:35.756 06:00:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:35.756 06:00:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:35.756 06:00:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=84075 00:13:35.756 06:00:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:13:35.756 06:00:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 84075 00:13:35.756 06:00:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 84075 ']' 00:13:35.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:35.756 06:00:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:35.756 06:00:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:35.756 06:00:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:35.756 06:00:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:35.756 06:00:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:35.756 [2024-07-13 06:00:27.385476] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:13:35.756 [2024-07-13 06:00:27.385604] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:13:36.016 [2024-07-13 06:00:27.530613] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:36.016 [2024-07-13 06:00:27.629632] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:36.016 [2024-07-13 06:00:27.630196] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:36.016 [2024-07-13 06:00:27.630719] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:36.016 [2024-07-13 06:00:27.631318] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:36.016 [2024-07-13 06:00:27.631648] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:36.016 [2024-07-13 06:00:27.632012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:13:36.016 [2024-07-13 06:00:27.632123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:13:36.016 [2024-07-13 06:00:27.632270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:13:36.016 [2024-07-13 06:00:27.632277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:36.016 [2024-07-13 06:00:27.638530] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:36.951 06:00:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:36.951 06:00:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:13:36.951 06:00:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:36.951 06:00:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:36.951 06:00:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:36.951 06:00:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:36.951 06:00:28 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:36.951 06:00:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.951 06:00:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:36.951 [2024-07-13 06:00:28.416998] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:36.951 06:00:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.951 06:00:28 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:36.951 06:00:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.951 06:00:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:36.951 Malloc0 00:13:36.951 06:00:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.951 06:00:28 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:36.951 06:00:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.951 06:00:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:36.951 06:00:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.951 06:00:28 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:36.951 06:00:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.951 06:00:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set 
+x 00:13:36.951 06:00:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.951 06:00:28 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:36.951 06:00:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.951 06:00:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:36.951 [2024-07-13 06:00:28.455148] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:36.951 06:00:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.951 06:00:28 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:13:36.951 06:00:28 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:36.951 06:00:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:13:36.951 06:00:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:13:36.951 06:00:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:36.951 06:00:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:36.951 { 00:13:36.951 "params": { 00:13:36.951 "name": "Nvme$subsystem", 00:13:36.951 "trtype": "$TEST_TRANSPORT", 00:13:36.951 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:36.951 "adrfam": "ipv4", 00:13:36.951 "trsvcid": "$NVMF_PORT", 00:13:36.951 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:36.951 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:36.951 "hdgst": ${hdgst:-false}, 00:13:36.951 "ddgst": ${ddgst:-false} 00:13:36.951 }, 00:13:36.951 "method": "bdev_nvme_attach_controller" 00:13:36.951 } 00:13:36.951 EOF 00:13:36.951 )") 00:13:36.951 06:00:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:13:36.952 06:00:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:13:36.952 06:00:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:13:36.952 06:00:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:36.952 "params": { 00:13:36.952 "name": "Nvme1", 00:13:36.952 "trtype": "tcp", 00:13:36.952 "traddr": "10.0.0.2", 00:13:36.952 "adrfam": "ipv4", 00:13:36.952 "trsvcid": "4420", 00:13:36.952 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:36.952 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:36.952 "hdgst": false, 00:13:36.952 "ddgst": false 00:13:36.952 }, 00:13:36.952 "method": "bdev_nvme_attach_controller" 00:13:36.952 }' 00:13:36.952 [2024-07-13 06:00:28.506322] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
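Stripped of the rpc_cmd/xtrace plumbing, the target set-up and the bdevio run above come down to roughly the following sketch (same bdev, subsystem and listener parameters as in the log; the log feeds the generated JSON through /dev/fd/62, for which process substitution is an equivalent shorthand here, and rpc.py is assumed to point at the target's default RPC socket):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json) --no-huge -s 1024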
00:13:36.952 [2024-07-13 06:00:28.506444] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid84114 ] 00:13:36.952 [2024-07-13 06:00:28.646089] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:37.210 [2024-07-13 06:00:28.747937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:37.210 [2024-07-13 06:00:28.748050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:37.210 [2024-07-13 06:00:28.748059] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.210 [2024-07-13 06:00:28.762943] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:37.210 I/O targets: 00:13:37.210 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:37.210 00:13:37.210 00:13:37.210 CUnit - A unit testing framework for C - Version 2.1-3 00:13:37.210 http://cunit.sourceforge.net/ 00:13:37.210 00:13:37.210 00:13:37.210 Suite: bdevio tests on: Nvme1n1 00:13:37.210 Test: blockdev write read block ...passed 00:13:37.210 Test: blockdev write zeroes read block ...passed 00:13:37.210 Test: blockdev write zeroes read no split ...passed 00:13:37.210 Test: blockdev write zeroes read split ...passed 00:13:37.468 Test: blockdev write zeroes read split partial ...passed 00:13:37.468 Test: blockdev reset ...[2024-07-13 06:00:28.948781] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:13:37.468 [2024-07-13 06:00:28.949157] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd56a80 (9): Bad file descriptor 00:13:37.468 [2024-07-13 06:00:28.967816] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:13:37.468 passed 00:13:37.468 Test: blockdev write read 8 blocks ...passed 00:13:37.468 Test: blockdev write read size > 128k ...passed 00:13:37.468 Test: blockdev write read invalid size ...passed 00:13:37.468 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:37.468 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:37.468 Test: blockdev write read max offset ...passed 00:13:37.468 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:37.468 Test: blockdev writev readv 8 blocks ...passed 00:13:37.468 Test: blockdev writev readv 30 x 1block ...passed 00:13:37.468 Test: blockdev writev readv block ...passed 00:13:37.468 Test: blockdev writev readv size > 128k ...passed 00:13:37.468 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:37.468 Test: blockdev comparev and writev ...[2024-07-13 06:00:28.976255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:37.468 [2024-07-13 06:00:28.976469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:37.468 [2024-07-13 06:00:28.976500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:37.468 [2024-07-13 06:00:28.976512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:37.468 [2024-07-13 06:00:28.976842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:37.468 [2024-07-13 06:00:28.976860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:37.468 [2024-07-13 06:00:28.976876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:37.468 [2024-07-13 06:00:28.976887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:37.468 [2024-07-13 06:00:28.977159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:37.468 [2024-07-13 06:00:28.977182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:37.468 [2024-07-13 06:00:28.977200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:37.468 [2024-07-13 06:00:28.977211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:37.468 [2024-07-13 06:00:28.977495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:37.468 [2024-07-13 06:00:28.977533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:37.468 [2024-07-13 06:00:28.977562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:37.468 [2024-07-13 06:00:28.977580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:37.468 passed 00:13:37.468 Test: blockdev nvme passthru rw ...passed 00:13:37.468 Test: blockdev nvme passthru vendor specific ...[2024-07-13 06:00:28.978637] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:37.468 [2024-07-13 06:00:28.978671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:37.469 [2024-07-13 06:00:28.978792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:37.469 [2024-07-13 06:00:28.978809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:37.469 passed 00:13:37.469 Test: blockdev nvme admin passthru ...[2024-07-13 06:00:28.978905] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:37.469 [2024-07-13 06:00:28.978928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:37.469 [2024-07-13 06:00:28.979032] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:37.469 [2024-07-13 06:00:28.979049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:37.469 passed 00:13:37.469 Test: blockdev copy ...passed 00:13:37.469 00:13:37.469 Run Summary: Type Total Ran Passed Failed Inactive 00:13:37.469 suites 1 1 n/a 0 0 00:13:37.469 tests 23 23 23 0 0 00:13:37.469 asserts 152 152 152 0 n/a 00:13:37.469 00:13:37.469 Elapsed time = 0.167 seconds 00:13:37.727 06:00:29 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:37.727 06:00:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.727 06:00:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:37.727 06:00:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.727 06:00:29 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:37.727 06:00:29 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:13:37.727 06:00:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:37.727 06:00:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:13:37.727 06:00:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:37.727 06:00:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:13:37.727 06:00:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:37.727 06:00:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:37.727 rmmod nvme_tcp 00:13:37.727 rmmod nvme_fabrics 00:13:37.727 rmmod nvme_keyring 00:13:37.727 06:00:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:37.727 06:00:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:13:37.727 06:00:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:13:37.727 06:00:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 84075 ']' 00:13:37.727 06:00:29 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@490 -- # killprocess 84075 00:13:37.727 06:00:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 84075 ']' 00:13:37.727 06:00:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 84075 00:13:37.727 06:00:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:13:37.727 06:00:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:37.727 06:00:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84075 00:13:37.727 killing process with pid 84075 00:13:37.727 06:00:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:13:37.727 06:00:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:13:37.727 06:00:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84075' 00:13:37.727 06:00:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 84075 00:13:37.727 06:00:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 84075 00:13:37.984 06:00:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:37.984 06:00:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:37.984 06:00:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:37.984 06:00:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:37.984 06:00:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:37.984 06:00:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:37.984 06:00:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:37.984 06:00:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:38.243 06:00:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:38.243 00:13:38.243 real 0m2.888s 00:13:38.243 user 0m9.312s 00:13:38.243 sys 0m1.104s 00:13:38.243 06:00:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:38.243 ************************************ 00:13:38.243 END TEST nvmf_bdevio_no_huge 00:13:38.243 ************************************ 00:13:38.243 06:00:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:38.243 06:00:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:38.243 06:00:29 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:38.243 06:00:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:38.243 06:00:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:38.243 06:00:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:38.243 ************************************ 00:13:38.243 START TEST nvmf_tls 00:13:38.243 ************************************ 00:13:38.243 06:00:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:38.243 * Looking for test storage... 
00:13:38.243 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:38.243 06:00:29 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:38.243 06:00:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:13:38.243 06:00:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:38.243 06:00:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:38.243 06:00:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:38.243 06:00:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:38.243 06:00:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:38.243 06:00:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:38.243 06:00:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:38.243 06:00:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:38.243 06:00:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:38.243 06:00:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:38.243 06:00:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:13:38.243 06:00:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=d95af516-4532-4483-a837-b3cd72acabce 00:13:38.243 06:00:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:38.243 06:00:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:38.243 06:00:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:38.243 06:00:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:38.243 06:00:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:38.243 06:00:29 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:38.243 06:00:29 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:38.243 06:00:29 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:38.243 06:00:29 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.243 06:00:29 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.243 06:00:29 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.243 06:00:29 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:13:38.243 06:00:29 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.243 06:00:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:13:38.243 06:00:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:38.243 06:00:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:38.243 06:00:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:38.243 06:00:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:38.243 06:00:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:38.243 06:00:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:38.243 06:00:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:38.243 06:00:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:38.243 06:00:29 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:38.243 06:00:29 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:13:38.243 06:00:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:38.243 06:00:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:38.243 06:00:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:38.243 06:00:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:38.243 06:00:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:38.243 06:00:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:38.243 06:00:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:38.243 06:00:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:38.244 06:00:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:38.244 06:00:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:38.244 06:00:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:38.244 06:00:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:38.244 06:00:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:38.244 06:00:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:38.244 06:00:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@141 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:13:38.244 06:00:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:38.244 06:00:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:38.244 06:00:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:38.244 06:00:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:38.244 06:00:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:38.244 06:00:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:38.244 06:00:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:38.244 06:00:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:38.244 06:00:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:38.244 06:00:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:38.244 06:00:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:38.244 06:00:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:38.244 06:00:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:38.244 Cannot find device "nvmf_tgt_br" 00:13:38.244 06:00:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # true 00:13:38.244 06:00:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:38.244 Cannot find device "nvmf_tgt_br2" 00:13:38.244 06:00:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # true 00:13:38.244 06:00:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:38.244 06:00:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:38.244 Cannot find device "nvmf_tgt_br" 00:13:38.244 06:00:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # true 00:13:38.244 06:00:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:38.244 Cannot find device "nvmf_tgt_br2" 00:13:38.244 06:00:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # true 00:13:38.244 06:00:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:38.502 06:00:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:38.502 06:00:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:38.503 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:38.503 06:00:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # true 00:13:38.503 06:00:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:38.503 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:38.503 06:00:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # true 00:13:38.503 06:00:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:38.503 06:00:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:38.503 06:00:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:38.503 06:00:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:38.503 06:00:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:13:38.503 06:00:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:38.503 06:00:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:38.503 06:00:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:38.503 06:00:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:38.503 06:00:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:38.503 06:00:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:38.503 06:00:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:38.503 06:00:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:38.503 06:00:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:38.503 06:00:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:38.503 06:00:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:38.503 06:00:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:38.503 06:00:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:38.503 06:00:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:38.503 06:00:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:38.503 06:00:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:38.503 06:00:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:38.503 06:00:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:38.503 06:00:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:38.503 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:38.503 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:13:38.503 00:13:38.503 --- 10.0.0.2 ping statistics --- 00:13:38.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:38.503 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:13:38.503 06:00:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:38.503 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:38.503 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:13:38.503 00:13:38.503 --- 10.0.0.3 ping statistics --- 00:13:38.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:38.503 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:13:38.503 06:00:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:38.503 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:38.503 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:13:38.503 00:13:38.503 --- 10.0.0.1 ping statistics --- 00:13:38.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:38.503 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:13:38.503 06:00:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:38.503 06:00:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:13:38.503 06:00:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:38.503 06:00:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:38.503 06:00:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:38.503 06:00:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:38.503 06:00:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:38.503 06:00:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:38.503 06:00:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:38.761 06:00:30 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:13:38.761 06:00:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:38.761 06:00:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:38.761 06:00:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:38.761 06:00:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84290 00:13:38.761 06:00:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:13:38.761 06:00:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84290 00:13:38.761 06:00:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84290 ']' 00:13:38.761 06:00:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:38.761 06:00:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:38.762 06:00:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:38.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:38.762 06:00:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:38.762 06:00:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:38.762 [2024-07-13 06:00:30.289202] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:13:38.762 [2024-07-13 06:00:30.289321] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:38.762 [2024-07-13 06:00:30.428238] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.762 [2024-07-13 06:00:30.471603] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:38.762 [2024-07-13 06:00:30.471668] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:38.762 [2024-07-13 06:00:30.471694] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:38.762 [2024-07-13 06:00:30.471704] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:38.762 [2024-07-13 06:00:30.471712] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:38.762 [2024-07-13 06:00:30.471748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:39.019 06:00:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:39.019 06:00:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:39.019 06:00:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:39.019 06:00:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:39.019 06:00:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:39.019 06:00:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:39.019 06:00:30 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:13:39.019 06:00:30 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:13:39.278 true 00:13:39.278 06:00:30 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:39.278 06:00:30 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:13:39.565 06:00:31 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:13:39.565 06:00:31 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:13:39.565 06:00:31 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:39.850 06:00:31 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:39.850 06:00:31 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:13:40.112 06:00:31 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:13:40.112 06:00:31 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:13:40.112 06:00:31 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:13:40.371 06:00:31 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:13:40.371 06:00:31 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:40.629 06:00:32 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:13:40.629 06:00:32 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:13:40.629 06:00:32 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:40.629 06:00:32 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:13:40.888 06:00:32 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:13:40.888 06:00:32 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:13:40.888 06:00:32 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:13:41.168 06:00:32 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:13:41.168 06:00:32 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 
00:13:41.168 06:00:32 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:13:41.168 06:00:32 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:13:41.168 06:00:32 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:13:41.426 06:00:33 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:41.426 06:00:33 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:13:41.683 06:00:33 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:13:41.683 06:00:33 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:13:41.683 06:00:33 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:13:41.683 06:00:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:13:41.683 06:00:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:13:41.683 06:00:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:13:41.684 06:00:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:13:41.684 06:00:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:13:41.684 06:00:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:13:41.684 06:00:33 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:41.684 06:00:33 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:13:41.684 06:00:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:13:41.684 06:00:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:13:41.684 06:00:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:13:41.684 06:00:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:13:41.684 06:00:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:13:41.684 06:00:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:13:41.684 06:00:33 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:41.684 06:00:33 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:13:41.684 06:00:33 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.IbEXrZAizF 00:13:41.684 06:00:33 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:13:41.946 06:00:33 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.cqQ33I44lj 00:13:41.946 06:00:33 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:41.946 06:00:33 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:41.946 06:00:33 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.IbEXrZAizF 00:13:41.946 06:00:33 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.cqQ33I44lj 00:13:41.946 06:00:33 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:42.204 06:00:33 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:13:42.205 [2024-07-13 06:00:33.893664] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket 
implementaion override: uring 00:13:42.462 06:00:33 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.IbEXrZAizF 00:13:42.462 06:00:33 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.IbEXrZAizF 00:13:42.462 06:00:33 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:42.462 [2024-07-13 06:00:34.178794] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:42.719 06:00:34 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:42.719 06:00:34 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:42.976 [2024-07-13 06:00:34.634919] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:42.976 [2024-07-13 06:00:34.635188] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:42.976 06:00:34 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:43.235 malloc0 00:13:43.235 06:00:34 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:43.493 06:00:35 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IbEXrZAizF 00:13:43.751 [2024-07-13 06:00:35.232601] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:43.751 06:00:35 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.IbEXrZAizF 00:13:53.719 Initializing NVMe Controllers 00:13:53.719 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:53.719 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:53.719 Initialization complete. Launching workers. 
00:13:53.719 ======================================================== 00:13:53.720 Latency(us) 00:13:53.720 Device Information : IOPS MiB/s Average min max 00:13:53.720 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10577.85 41.32 6051.73 1349.69 8327.01 00:13:53.720 ======================================================== 00:13:53.720 Total : 10577.85 41.32 6051.73 1349.69 8327.01 00:13:53.720 00:13:53.720 06:00:45 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.IbEXrZAizF 00:13:53.720 06:00:45 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:53.720 06:00:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:53.720 06:00:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:53.720 06:00:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.IbEXrZAizF' 00:13:53.720 06:00:45 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:53.720 06:00:45 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84515 00:13:53.720 06:00:45 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:53.720 06:00:45 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:53.720 06:00:45 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84515 /var/tmp/bdevperf.sock 00:13:53.720 06:00:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84515 ']' 00:13:53.720 06:00:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:53.720 06:00:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:53.720 06:00:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:53.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:53.720 06:00:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:53.720 06:00:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:53.988 [2024-07-13 06:00:45.487181] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:13:53.988 [2024-07-13 06:00:45.487488] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84515 ] 00:13:53.988 [2024-07-13 06:00:45.625636] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.988 [2024-07-13 06:00:45.666588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:53.988 [2024-07-13 06:00:45.699493] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:54.249 06:00:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:54.249 06:00:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:54.249 06:00:45 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IbEXrZAizF 00:13:54.249 [2024-07-13 06:00:45.928109] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:54.249 [2024-07-13 06:00:45.928456] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:54.506 TLSTESTn1 00:13:54.506 06:00:46 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:54.506 Running I/O for 10 seconds... 00:14:04.478 00:14:04.478 Latency(us) 00:14:04.478 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:04.478 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:04.478 Verification LBA range: start 0x0 length 0x2000 00:14:04.478 TLSTESTn1 : 10.02 4239.16 16.56 0.00 0.00 30135.85 9711.24 20375.74 00:14:04.478 =================================================================================================================== 00:14:04.478 Total : 4239.16 16.56 0.00 0.00 30135.85 9711.24 20375.74 00:14:04.478 0 00:14:04.478 06:00:56 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:04.478 06:00:56 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 84515 00:14:04.478 06:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84515 ']' 00:14:04.478 06:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84515 00:14:04.478 06:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:04.478 06:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:04.478 06:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84515 00:14:04.478 killing process with pid 84515 00:14:04.478 Received shutdown signal, test time was about 10.000000 seconds 00:14:04.478 00:14:04.478 Latency(us) 00:14:04.478 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:04.478 =================================================================================================================== 00:14:04.478 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:04.478 06:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:04.478 06:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' 
reactor_2 = sudo ']' 00:14:04.478 06:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84515' 00:14:04.478 06:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84515 00:14:04.478 [2024-07-13 06:00:56.163174] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:04.478 06:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84515 00:14:04.736 06:00:56 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.cqQ33I44lj 00:14:04.736 06:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:04.736 06:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.cqQ33I44lj 00:14:04.736 06:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:14:04.736 06:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:04.736 06:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:14:04.736 06:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:04.736 06:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.cqQ33I44lj 00:14:04.736 06:00:56 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:04.736 06:00:56 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:04.736 06:00:56 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:04.736 06:00:56 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.cqQ33I44lj' 00:14:04.736 06:00:56 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:04.736 06:00:56 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84636 00:14:04.736 06:00:56 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:04.736 06:00:56 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:04.736 06:00:56 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84636 /var/tmp/bdevperf.sock 00:14:04.736 06:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84636 ']' 00:14:04.736 06:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:04.736 06:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:04.736 06:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:04.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:04.736 06:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:04.736 06:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:04.736 [2024-07-13 06:00:56.351440] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:14:04.736 [2024-07-13 06:00:56.351708] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84636 ] 00:14:04.994 [2024-07-13 06:00:56.481185] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:04.994 [2024-07-13 06:00:56.516986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:04.994 [2024-07-13 06:00:56.545902] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:04.994 06:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:04.994 06:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:04.994 06:00:56 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.cqQ33I44lj 00:14:05.252 [2024-07-13 06:00:56.896966] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:05.252 [2024-07-13 06:00:56.897328] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:05.252 [2024-07-13 06:00:56.904084] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:05.252 [2024-07-13 06:00:56.904994] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21b22d0 (107): Transport endpoint is not connected 00:14:05.252 [2024-07-13 06:00:56.906002] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21b22d0 (9): Bad file descriptor 00:14:05.252 [2024-07-13 06:00:56.906982] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:05.252 [2024-07-13 06:00:56.907171] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:05.252 [2024-07-13 06:00:56.907324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:14:05.252 request: 00:14:05.253 { 00:14:05.253 "name": "TLSTEST", 00:14:05.253 "trtype": "tcp", 00:14:05.253 "traddr": "10.0.0.2", 00:14:05.253 "adrfam": "ipv4", 00:14:05.253 "trsvcid": "4420", 00:14:05.253 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:05.253 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:05.253 "prchk_reftag": false, 00:14:05.253 "prchk_guard": false, 00:14:05.253 "hdgst": false, 00:14:05.253 "ddgst": false, 00:14:05.253 "psk": "/tmp/tmp.cqQ33I44lj", 00:14:05.253 "method": "bdev_nvme_attach_controller", 00:14:05.253 "req_id": 1 00:14:05.253 } 00:14:05.253 Got JSON-RPC error response 00:14:05.253 response: 00:14:05.253 { 00:14:05.253 "code": -5, 00:14:05.253 "message": "Input/output error" 00:14:05.253 } 00:14:05.253 06:00:56 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 84636 00:14:05.253 06:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84636 ']' 00:14:05.253 06:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84636 00:14:05.253 06:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:05.253 06:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:05.253 06:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84636 00:14:05.253 killing process with pid 84636 00:14:05.253 Received shutdown signal, test time was about 10.000000 seconds 00:14:05.253 00:14:05.253 Latency(us) 00:14:05.253 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:05.253 =================================================================================================================== 00:14:05.253 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:05.253 06:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:05.253 06:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:05.253 06:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84636' 00:14:05.253 06:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84636 00:14:05.253 [2024-07-13 06:00:56.955773] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:05.253 06:00:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84636 00:14:05.512 06:00:57 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:05.512 06:00:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:05.512 06:00:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:05.512 06:00:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:05.512 06:00:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:05.512 06:00:57 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.IbEXrZAizF 00:14:05.512 06:00:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:05.512 06:00:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.IbEXrZAizF 00:14:05.512 06:00:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:14:05.512 06:00:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:05.512 06:00:57 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:14:05.512 06:00:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:05.512 06:00:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.IbEXrZAizF 00:14:05.512 06:00:57 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:05.512 06:00:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:05.512 06:00:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:14:05.512 06:00:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.IbEXrZAizF' 00:14:05.512 06:00:57 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:05.512 06:00:57 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:05.512 06:00:57 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84650 00:14:05.512 06:00:57 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:05.512 06:00:57 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84650 /var/tmp/bdevperf.sock 00:14:05.512 06:00:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84650 ']' 00:14:05.512 06:00:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:05.512 06:00:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:05.512 06:00:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:05.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:05.512 06:00:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:05.512 06:00:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:05.512 [2024-07-13 06:00:57.126917] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:14:05.512 [2024-07-13 06:00:57.127130] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84650 ] 00:14:05.771 [2024-07-13 06:00:57.262746] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.771 [2024-07-13 06:00:57.296886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:05.771 [2024-07-13 06:00:57.325112] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:05.771 06:00:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:05.771 06:00:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:05.771 06:00:57 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.IbEXrZAizF 00:14:06.028 [2024-07-13 06:00:57.576596] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:06.028 [2024-07-13 06:00:57.576953] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:06.028 [2024-07-13 06:00:57.584009] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:06.028 [2024-07-13 06:00:57.584257] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:06.028 [2024-07-13 06:00:57.584530] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:06.028 [2024-07-13 06:00:57.584675] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5392d0 (107): Transport endpoint is not connected 00:14:06.028 [2024-07-13 06:00:57.585667] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5392d0 (9): Bad file descriptor 00:14:06.028 [2024-07-13 06:00:57.586662] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:06.028 [2024-07-13 06:00:57.586832] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:06.028 [2024-07-13 06:00:57.586940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:14:06.028 request: 00:14:06.028 { 00:14:06.028 "name": "TLSTEST", 00:14:06.028 "trtype": "tcp", 00:14:06.028 "traddr": "10.0.0.2", 00:14:06.028 "adrfam": "ipv4", 00:14:06.028 "trsvcid": "4420", 00:14:06.028 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:06.028 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:14:06.028 "prchk_reftag": false, 00:14:06.028 "prchk_guard": false, 00:14:06.028 "hdgst": false, 00:14:06.028 "ddgst": false, 00:14:06.029 "psk": "/tmp/tmp.IbEXrZAizF", 00:14:06.029 "method": "bdev_nvme_attach_controller", 00:14:06.029 "req_id": 1 00:14:06.029 } 00:14:06.029 Got JSON-RPC error response 00:14:06.029 response: 00:14:06.029 { 00:14:06.029 "code": -5, 00:14:06.029 "message": "Input/output error" 00:14:06.029 } 00:14:06.029 06:00:57 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 84650 00:14:06.029 06:00:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84650 ']' 00:14:06.029 06:00:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84650 00:14:06.029 06:00:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:06.029 06:00:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:06.029 06:00:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84650 00:14:06.029 killing process with pid 84650 00:14:06.029 Received shutdown signal, test time was about 10.000000 seconds 00:14:06.029 00:14:06.029 Latency(us) 00:14:06.029 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:06.029 =================================================================================================================== 00:14:06.029 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:06.029 06:00:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:06.029 06:00:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:06.029 06:00:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84650' 00:14:06.029 06:00:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84650 00:14:06.029 [2024-07-13 06:00:57.624607] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:06.029 06:00:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84650 00:14:06.287 06:00:57 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:06.287 06:00:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:06.287 06:00:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:06.287 06:00:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:06.287 06:00:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:06.287 06:00:57 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.IbEXrZAizF 00:14:06.287 06:00:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:06.287 06:00:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.IbEXrZAizF 00:14:06.287 06:00:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:14:06.287 06:00:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:06.287 06:00:57 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:14:06.287 06:00:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:06.287 06:00:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.IbEXrZAizF 00:14:06.287 06:00:57 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:06.287 06:00:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:14:06.287 06:00:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:06.287 06:00:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.IbEXrZAizF' 00:14:06.287 06:00:57 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:06.287 06:00:57 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84669 00:14:06.287 06:00:57 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:06.287 06:00:57 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:06.287 06:00:57 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84669 /var/tmp/bdevperf.sock 00:14:06.287 06:00:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84669 ']' 00:14:06.287 06:00:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:06.287 06:00:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:06.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:06.287 06:00:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:06.287 06:00:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:06.287 06:00:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:06.287 [2024-07-13 06:00:57.808351] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:14:06.287 [2024-07-13 06:00:57.808469] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84669 ] 00:14:06.287 [2024-07-13 06:00:57.938107] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:06.287 [2024-07-13 06:00:57.971293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:06.287 [2024-07-13 06:00:57.998712] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:06.544 06:00:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:06.544 06:00:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:06.544 06:00:58 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IbEXrZAizF 00:14:06.544 [2024-07-13 06:00:58.249025] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:06.544 [2024-07-13 06:00:58.249323] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:06.544 [2024-07-13 06:00:58.254264] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:06.544 [2024-07-13 06:00:58.254555] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:06.544 [2024-07-13 06:00:58.254721] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:06.544 [2024-07-13 06:00:58.254960] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11072d0 (107): Transport endpoint is not connected 00:14:06.544 [2024-07-13 06:00:58.255948] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11072d0 (9): Bad file descriptor 00:14:06.544 [2024-07-13 06:00:58.256944] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:14:06.544 [2024-07-13 06:00:58.256984] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:06.544 [2024-07-13 06:00:58.256997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:14:06.544 request: 00:14:06.544 { 00:14:06.544 "name": "TLSTEST", 00:14:06.544 "trtype": "tcp", 00:14:06.544 "traddr": "10.0.0.2", 00:14:06.544 "adrfam": "ipv4", 00:14:06.545 "trsvcid": "4420", 00:14:06.545 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:14:06.545 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:06.545 "prchk_reftag": false, 00:14:06.545 "prchk_guard": false, 00:14:06.545 "hdgst": false, 00:14:06.545 "ddgst": false, 00:14:06.545 "psk": "/tmp/tmp.IbEXrZAizF", 00:14:06.545 "method": "bdev_nvme_attach_controller", 00:14:06.545 "req_id": 1 00:14:06.545 } 00:14:06.545 Got JSON-RPC error response 00:14:06.545 response: 00:14:06.545 { 00:14:06.545 "code": -5, 00:14:06.545 "message": "Input/output error" 00:14:06.545 } 00:14:06.802 06:00:58 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 84669 00:14:06.802 06:00:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84669 ']' 00:14:06.802 06:00:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84669 00:14:06.802 06:00:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:06.802 06:00:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:06.802 06:00:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84669 00:14:06.802 06:00:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:06.802 06:00:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:06.802 06:00:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84669' 00:14:06.802 killing process with pid 84669 00:14:06.802 06:00:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84669 00:14:06.802 Received shutdown signal, test time was about 10.000000 seconds 00:14:06.802 00:14:06.802 Latency(us) 00:14:06.802 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:06.802 =================================================================================================================== 00:14:06.802 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:06.802 [2024-07-13 06:00:58.300169] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' 06:00:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84669 00:14:06.802 scheduled for removal in v24.09 hit 1 times 00:14:06.802 06:00:58 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:06.802 06:00:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:06.802 06:00:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:06.802 06:00:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:06.802 06:00:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:06.802 06:00:58 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:06.802 06:00:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:06.802 06:00:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:06.802 06:00:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:14:06.802 06:00:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:06.802 06:00:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 
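The negative cases above (wrong key file for pid 84636, wrong hostnqn for pid 84650, wrong subsystem for pid 84669) and the no-PSK case that follows all fail to bring up the TLS connection and surface as the same JSON-RPC error ("code": -5, "Input/output error"), since only /tmp/tmp.IbEXrZAizF was registered for nqn.2016-06.io.spdk:host1 against nqn.2016-06.io.spdk:cnode1 via nvmf_subsystem_add_host earlier in the run. The sketch below is a rough, unverified reconstruction of what the format_interchange_psk helper (invoked around 06:00:33 above) appears to do to produce keys such as NVMeTLSkey-1:01:MDAxMTIy...JEiQ:; the CRC-32 choice, the little-endian packing, and the two-digit hash field are assumptions inferred from the key strings visible in this log, not taken from the script source.

    # Hedged sketch: approximate reconstruction of the format_interchange_psk helper
    # used above to derive the NVMe TLS interchange-format keys written to the
    # /tmp/tmp.* files. Assumptions (not confirmed by this log): zlib CRC-32,
    # packed little-endian, hash id printed as a two-digit field (01 here).
    import base64
    import zlib


    def format_interchange_psk(key: str, digest: int = 1) -> str:
        data = key.encode("utf-8")                    # key text used as-is, not hex-decoded
        crc = zlib.crc32(data).to_bytes(4, "little")  # 4-byte checksum appended to the key
        b64 = base64.b64encode(data + crc).decode("utf-8")
        return f"NVMeTLSkey-1:{digest:02d}:{b64}:"


    if __name__ == "__main__":
        # Mirrors the two keys generated earlier in this run.
        print(format_interchange_psk("00112233445566778899aabbccddeeff", 1))
        print(format_interchange_psk("ffeeddccbbaa99887766554433221100", 1))

If this reconstruction is right, the two printed strings should match the keys echoed into /tmp/tmp.IbEXrZAizF and /tmp/tmp.cqQ33I44lj earlier in this run, and an attach only succeeds when the key passed to bdev_nvme_attach_controller --psk matches the one registered for that hostnqn/subsystem pair.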
00:14:06.802 06:00:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:06.802 06:00:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:06.802 06:00:58 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:06.802 06:00:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:06.802 06:00:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:06.802 06:00:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:14:06.802 06:00:58 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:06.802 06:00:58 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84685 00:14:06.802 06:00:58 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:06.802 06:00:58 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:06.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:06.802 06:00:58 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84685 /var/tmp/bdevperf.sock 00:14:06.802 06:00:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84685 ']' 00:14:06.802 06:00:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:06.802 06:00:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:06.802 06:00:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:06.802 06:00:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:06.802 06:00:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:06.802 [2024-07-13 06:00:58.484217] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:14:06.802 [2024-07-13 06:00:58.484344] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84685 ] 00:14:07.060 [2024-07-13 06:00:58.615722] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.060 [2024-07-13 06:00:58.650247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:07.060 [2024-07-13 06:00:58.680319] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:07.060 06:00:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:07.060 06:00:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:07.061 06:00:58 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:07.319 [2024-07-13 06:00:58.984419] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:07.319 [2024-07-13 06:00:58.986454] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10646b0 (9): Bad file descriptor 00:14:07.319 [2024-07-13 06:00:58.987465] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:07.319 [2024-07-13 06:00:58.987653] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:07.319 [2024-07-13 06:00:58.987775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:14:07.319 request: 00:14:07.319 { 00:14:07.319 "name": "TLSTEST", 00:14:07.319 "trtype": "tcp", 00:14:07.319 "traddr": "10.0.0.2", 00:14:07.319 "adrfam": "ipv4", 00:14:07.319 "trsvcid": "4420", 00:14:07.319 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:07.319 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:07.319 "prchk_reftag": false, 00:14:07.319 "prchk_guard": false, 00:14:07.319 "hdgst": false, 00:14:07.319 "ddgst": false, 00:14:07.319 "method": "bdev_nvme_attach_controller", 00:14:07.319 "req_id": 1 00:14:07.319 } 00:14:07.319 Got JSON-RPC error response 00:14:07.319 response: 00:14:07.319 { 00:14:07.319 "code": -5, 00:14:07.319 "message": "Input/output error" 00:14:07.319 } 00:14:07.319 06:00:59 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 84685 00:14:07.319 06:00:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84685 ']' 00:14:07.319 06:00:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84685 00:14:07.319 06:00:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:07.319 06:00:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:07.319 06:00:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84685 00:14:07.319 killing process with pid 84685 00:14:07.319 Received shutdown signal, test time was about 10.000000 seconds 00:14:07.319 00:14:07.319 Latency(us) 00:14:07.319 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:07.319 =================================================================================================================== 00:14:07.319 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:07.319 06:00:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:07.319 06:00:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:07.319 06:00:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84685' 00:14:07.319 06:00:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84685 00:14:07.319 06:00:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84685 00:14:07.577 06:00:59 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:07.577 06:00:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:07.577 06:00:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:07.577 06:00:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:07.577 06:00:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:07.578 06:00:59 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 84290 00:14:07.578 06:00:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84290 ']' 00:14:07.578 06:00:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84290 00:14:07.578 06:00:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:07.578 06:00:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:07.578 06:00:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84290 00:14:07.578 killing process with pid 84290 00:14:07.578 06:00:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:07.578 06:00:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:07.578 06:00:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 
84290' 00:14:07.578 06:00:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84290 00:14:07.578 [2024-07-13 06:00:59.207325] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:07.578 06:00:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84290 00:14:07.835 06:00:59 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:14:07.835 06:00:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:14:07.835 06:00:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:14:07.835 06:00:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:14:07.835 06:00:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:14:07.835 06:00:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:14:07.835 06:00:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:14:07.835 06:00:59 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:07.835 06:00:59 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:14:07.835 06:00:59 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.ceMYtF1fqw 00:14:07.835 06:00:59 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:07.835 06:00:59 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.ceMYtF1fqw 00:14:07.835 06:00:59 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:14:07.835 06:00:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:07.835 06:00:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:07.835 06:00:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:07.835 06:00:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84715 00:14:07.835 06:00:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84715 00:14:07.835 06:00:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84715 ']' 00:14:07.835 06:00:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:07.835 06:00:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:07.835 06:00:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:07.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:07.835 06:00:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:07.835 06:00:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:07.835 06:00:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:07.835 [2024-07-13 06:00:59.499842] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
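The format_interchange_psk step above wraps the raw configured key into the TLS PSK interchange form NVMeTLSkey-1:02:<base64 blob>: that is then written to /tmp/tmp.ceMYtF1fqw and restricted to mode 0600. A sketch of that transformation is below. That the base64 payload is the key bytes with a CRC32 appended follows from the helper's inputs in the trace, but the CRC byte order used here (little-endian) and the zero-padded digest field are assumptions, not something this log states.

    # Sketch of the PSK interchange formatting traced above. Assumptions: the
    # base64 payload is key bytes + CRC32 appended little-endian, and the
    # digest id is printed as a zero-padded two-digit field.
    import base64
    import zlib

    def format_interchange_psk(key: str, digest: int, prefix: str = "NVMeTLSkey-1") -> str:
        key_bytes = key.encode()
        crc = zlib.crc32(key_bytes).to_bytes(4, "little")  # assumed byte order
        blob = base64.b64encode(key_bytes + crc).decode()
        return f"{prefix}:{digest:02}:{blob}:"

    key_long = format_interchange_psk(
        "00112233445566778899aabbccddeeff0011223344556677", 2)
    print(key_long)  # expected to match the NVMeTLSkey-1:02:...: value in the log

The 0600 mode on the temp file matters later: the same key file is reused with looser permissions to drive the negative permission tests further down.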
00:14:07.835 [2024-07-13 06:00:59.499950] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:08.093 [2024-07-13 06:00:59.643766] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.093 [2024-07-13 06:00:59.685100] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:08.093 [2024-07-13 06:00:59.685153] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:08.093 [2024-07-13 06:00:59.685181] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:08.093 [2024-07-13 06:00:59.685190] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:08.093 [2024-07-13 06:00:59.685197] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:08.093 [2024-07-13 06:00:59.685239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:08.093 [2024-07-13 06:00:59.720153] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:08.093 06:00:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:08.093 06:00:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:08.093 06:00:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:08.093 06:00:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:08.093 06:00:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:08.093 06:00:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:08.093 06:00:59 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.ceMYtF1fqw 00:14:08.093 06:00:59 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ceMYtF1fqw 00:14:08.093 06:00:59 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:08.350 [2024-07-13 06:01:00.057645] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:08.351 06:01:00 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:08.916 06:01:00 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:08.916 [2024-07-13 06:01:00.573787] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:08.916 [2024-07-13 06:01:00.574047] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:08.916 06:01:00 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:09.173 malloc0 00:14:09.173 06:01:00 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:09.431 06:01:01 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ceMYtF1fqw 00:14:09.687 
[2024-07-13 06:01:01.287980] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:09.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:09.687 06:01:01 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ceMYtF1fqw 00:14:09.687 06:01:01 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:09.687 06:01:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:09.687 06:01:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:09.687 06:01:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ceMYtF1fqw' 00:14:09.687 06:01:01 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:09.687 06:01:01 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84762 00:14:09.687 06:01:01 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:09.687 06:01:01 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84762 /var/tmp/bdevperf.sock 00:14:09.687 06:01:01 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:09.687 06:01:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84762 ']' 00:14:09.687 06:01:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:09.687 06:01:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:09.687 06:01:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:09.687 06:01:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:09.687 06:01:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:09.687 [2024-07-13 06:01:01.354409] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
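By this point setup_nvmf_tgt has issued the whole target-side sequence one rpc.py call at a time: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1, a TLS-enabled listener (-k), a malloc bdev, a namespace, and finally the host entry carrying the PSK. Collected in one place, the same sequence looks roughly like the sketch below; the rpc.py path, addresses, and flags are taken verbatim from the trace, while running them from a Python script instead of the test's shell helpers is purely illustrative.

    # The target-side TLS setup traced above, gathered into one script.
    # Paths, NQNs and flags are the ones shown in this log.
    import subprocess

    RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
    SUBNQN = "nqn.2016-06.io.spdk:cnode1"
    HOSTNQN = "nqn.2016-06.io.spdk:host1"
    PSK_PATH = "/tmp/tmp.ceMYtF1fqw"  # interchange-format key file, mode 0600

    def rpc(*args: str) -> None:
        subprocess.run([RPC, *args], check=True)

    rpc("nvmf_create_transport", "-t", "tcp", "-o")
    rpc("nvmf_create_subsystem", SUBNQN, "-s", "SPDK00000000000001", "-m", "10")
    rpc("nvmf_subsystem_add_listener", SUBNQN, "-t", "tcp", "-a", "10.0.0.2", "-s", "4420", "-k")
    rpc("bdev_malloc_create", "32", "4096", "-b", "malloc0")
    rpc("nvmf_subsystem_add_ns", SUBNQN, "malloc0", "-n", "1")
    rpc("nvmf_subsystem_add_host", SUBNQN, HOSTNQN, "--psk", PSK_PATH)

The add_host step is what triggers the nvmf_tcp_psk_path deprecation warning seen above; with the key file readable only by its owner it succeeds, so the following bdevperf run can attach with the same PSK.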
00:14:09.688 [2024-07-13 06:01:01.354674] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84762 ] 00:14:09.945 [2024-07-13 06:01:01.488677] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:09.945 [2024-07-13 06:01:01.524257] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:09.945 [2024-07-13 06:01:01.555185] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:09.945 06:01:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:09.945 06:01:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:09.945 06:01:01 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ceMYtF1fqw 00:14:10.202 [2024-07-13 06:01:01.788011] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:10.203 [2024-07-13 06:01:01.788401] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:10.203 TLSTESTn1 00:14:10.203 06:01:01 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:10.460 Running I/O for 10 seconds... 00:14:20.454 00:14:20.454 Latency(us) 00:14:20.454 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:20.454 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:20.454 Verification LBA range: start 0x0 length 0x2000 00:14:20.454 TLSTESTn1 : 10.02 3774.77 14.75 0.00 0.00 33841.12 7030.23 36938.47 00:14:20.454 =================================================================================================================== 00:14:20.454 Total : 3774.77 14.75 0.00 0.00 33841.12 7030.23 36938.47 00:14:20.454 0 00:14:20.454 06:01:12 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:20.454 06:01:12 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 84762 00:14:20.454 06:01:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84762 ']' 00:14:20.454 06:01:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84762 00:14:20.454 06:01:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:20.454 06:01:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:20.454 06:01:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84762 00:14:20.454 killing process with pid 84762 00:14:20.454 Received shutdown signal, test time was about 10.000000 seconds 00:14:20.454 00:14:20.454 Latency(us) 00:14:20.454 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:20.454 =================================================================================================================== 00:14:20.454 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:20.454 06:01:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:20.454 06:01:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' 
reactor_2 = sudo ']' 00:14:20.454 06:01:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84762' 00:14:20.454 06:01:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84762 00:14:20.454 [2024-07-13 06:01:12.037356] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:20.454 06:01:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84762 00:14:20.454 06:01:12 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.ceMYtF1fqw 00:14:20.454 06:01:12 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ceMYtF1fqw 00:14:20.454 06:01:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:20.454 06:01:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ceMYtF1fqw 00:14:20.454 06:01:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:14:20.712 06:01:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:20.712 06:01:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:14:20.712 06:01:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:20.712 06:01:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ceMYtF1fqw 00:14:20.712 06:01:12 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:20.712 06:01:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:20.712 06:01:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:20.712 06:01:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ceMYtF1fqw' 00:14:20.712 06:01:12 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:20.712 06:01:12 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84878 00:14:20.712 06:01:12 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:20.712 06:01:12 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:20.712 06:01:12 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84878 /var/tmp/bdevperf.sock 00:14:20.712 06:01:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84878 ']' 00:14:20.712 06:01:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:20.712 06:01:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:20.712 06:01:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:20.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:20.712 06:01:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:20.712 06:01:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:20.712 [2024-07-13 06:01:12.236983] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
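For the successful TLS run a little earlier, bdevperf reported TLSTESTn1 at about 3774.77 IOPS and 14.75 MiB/s over the 10-second verify workload with 4096-byte I/Os (-q 128 -o 4096). Those two columns are consistent with each other, as the quick cross-check below shows; the only assumption is the usual 2^20 bytes-per-MiB conversion.

    # Cross-check of the TLSTESTn1 columns reported above.
    iops = 3774.77        # IOPS column from the bdevperf table
    io_size = 4096        # -o 4096 on the bdevperf command line
    mib_per_s = iops * io_size / 2**20
    print(f"{mib_per_s:.2f} MiB/s")  # ~14.75, matching the MiB/s column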
00:14:20.712 [2024-07-13 06:01:12.237273] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84878 ] 00:14:20.712 [2024-07-13 06:01:12.372070] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:20.712 [2024-07-13 06:01:12.407696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:20.712 [2024-07-13 06:01:12.436750] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:20.969 06:01:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:20.969 06:01:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:20.969 06:01:12 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ceMYtF1fqw 00:14:21.228 [2024-07-13 06:01:12.735581] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:21.228 [2024-07-13 06:01:12.735880] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:14:21.228 [2024-07-13 06:01:12.736001] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.ceMYtF1fqw 00:14:21.228 request: 00:14:21.228 { 00:14:21.228 "name": "TLSTEST", 00:14:21.228 "trtype": "tcp", 00:14:21.228 "traddr": "10.0.0.2", 00:14:21.228 "adrfam": "ipv4", 00:14:21.228 "trsvcid": "4420", 00:14:21.228 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:21.228 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:21.228 "prchk_reftag": false, 00:14:21.228 "prchk_guard": false, 00:14:21.228 "hdgst": false, 00:14:21.228 "ddgst": false, 00:14:21.228 "psk": "/tmp/tmp.ceMYtF1fqw", 00:14:21.228 "method": "bdev_nvme_attach_controller", 00:14:21.228 "req_id": 1 00:14:21.228 } 00:14:21.228 Got JSON-RPC error response 00:14:21.228 response: 00:14:21.228 { 00:14:21.228 "code": -1, 00:14:21.228 "message": "Operation not permitted" 00:14:21.228 } 00:14:21.228 06:01:12 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 84878 00:14:21.228 06:01:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84878 ']' 00:14:21.228 06:01:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84878 00:14:21.228 06:01:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:21.228 06:01:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:21.228 06:01:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84878 00:14:21.228 killing process with pid 84878 00:14:21.228 Received shutdown signal, test time was about 10.000000 seconds 00:14:21.228 00:14:21.228 Latency(us) 00:14:21.228 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:21.228 =================================================================================================================== 00:14:21.228 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:21.228 06:01:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:21.228 06:01:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:21.228 06:01:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # 
echo 'killing process with pid 84878' 00:14:21.228 06:01:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84878 00:14:21.228 06:01:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84878 00:14:21.228 06:01:12 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:21.228 06:01:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:21.228 06:01:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:21.228 06:01:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:21.228 06:01:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:21.228 06:01:12 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 84715 00:14:21.228 06:01:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84715 ']' 00:14:21.228 06:01:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84715 00:14:21.228 06:01:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:21.228 06:01:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:21.228 06:01:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84715 00:14:21.228 killing process with pid 84715 00:14:21.228 06:01:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:21.228 06:01:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:21.228 06:01:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84715' 00:14:21.228 06:01:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84715 00:14:21.228 [2024-07-13 06:01:12.938562] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:21.228 06:01:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84715 00:14:21.487 06:01:13 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:14:21.487 06:01:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:21.487 06:01:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:21.487 06:01:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:21.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:21.487 06:01:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84903 00:14:21.487 06:01:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:21.487 06:01:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84903 00:14:21.487 06:01:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84903 ']' 00:14:21.487 06:01:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.487 06:01:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:21.487 06:01:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:21.487 06:01:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:21.487 06:01:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:21.487 [2024-07-13 06:01:13.150473] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:14:21.487 [2024-07-13 06:01:13.150781] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:21.744 [2024-07-13 06:01:13.287186] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.744 [2024-07-13 06:01:13.323939] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:21.744 [2024-07-13 06:01:13.324181] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:21.745 [2024-07-13 06:01:13.324322] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:21.745 [2024-07-13 06:01:13.324488] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:21.745 [2024-07-13 06:01:13.324532] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:21.745 [2024-07-13 06:01:13.324589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:21.745 [2024-07-13 06:01:13.355587] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:21.745 06:01:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:21.745 06:01:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:21.745 06:01:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:21.745 06:01:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:21.745 06:01:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:21.745 06:01:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:21.745 06:01:13 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.ceMYtF1fqw 00:14:21.745 06:01:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:21.745 06:01:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.ceMYtF1fqw 00:14:21.745 06:01:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:14:21.745 06:01:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:21.745 06:01:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:14:21.745 06:01:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:21.745 06:01:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.ceMYtF1fqw 00:14:21.745 06:01:13 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ceMYtF1fqw 00:14:21.745 06:01:13 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:22.002 [2024-07-13 06:01:13.707957] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:22.002 06:01:13 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:22.260 06:01:13 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:22.518 [2024-07-13 06:01:14.136031] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: 
TLS support is considered experimental 00:14:22.518 [2024-07-13 06:01:14.136243] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:22.518 06:01:14 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:22.777 malloc0 00:14:22.777 06:01:14 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:23.035 06:01:14 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ceMYtF1fqw 00:14:23.294 [2024-07-13 06:01:14.895626] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:14:23.294 [2024-07-13 06:01:14.895674] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:14:23.294 [2024-07-13 06:01:14.895704] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:14:23.294 request: 00:14:23.294 { 00:14:23.294 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:23.294 "host": "nqn.2016-06.io.spdk:host1", 00:14:23.294 "psk": "/tmp/tmp.ceMYtF1fqw", 00:14:23.294 "method": "nvmf_subsystem_add_host", 00:14:23.294 "req_id": 1 00:14:23.294 } 00:14:23.294 Got JSON-RPC error response 00:14:23.294 response: 00:14:23.294 { 00:14:23.294 "code": -32603, 00:14:23.294 "message": "Internal error" 00:14:23.294 } 00:14:23.294 06:01:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:23.294 06:01:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:23.294 06:01:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:23.294 06:01:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:23.294 06:01:14 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 84903 00:14:23.294 06:01:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84903 ']' 00:14:23.294 06:01:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84903 00:14:23.294 06:01:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:23.294 06:01:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:23.294 06:01:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84903 00:14:23.294 06:01:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:23.294 06:01:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:23.294 06:01:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84903' 00:14:23.294 killing process with pid 84903 00:14:23.294 06:01:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84903 00:14:23.294 06:01:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84903 00:14:23.565 06:01:15 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.ceMYtF1fqw 00:14:23.565 06:01:15 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:14:23.565 06:01:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:23.565 06:01:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:23.565 06:01:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:23.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
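Both permission failures above come from the same change: once the interchange key file is loosened to 0666, the initiator-side bdev_nvme_attach_controller is refused with "Incorrect permissions for PSK file" (code -1, Operation not permitted), and the target-side nvmf_subsystem_add_host hits the equivalent check in tcp_load_psk and surfaces as -32603 Internal error. A small pre-flight check in the spirit of what the target enforces is sketched below; the exact rule used here (no group or other permission bits) is an assumption that is merely consistent with 0600 passing and 0666 failing in this log, not a statement of SPDK's precise policy.

    # Pre-flight check for a TLS PSK file before handing its path to
    # bdev_nvme_attach_controller or nvmf_subsystem_add_host. The "no group
    # or other bits" rule is an assumption consistent with this test run
    # (0600 accepted, 0666 rejected); SPDK's real check may differ in detail.
    import os
    import stat
    import sys

    def psk_file_permissions_ok(path: str) -> bool:
        mode = stat.S_IMODE(os.stat(path).st_mode)
        return (mode & (stat.S_IRWXG | stat.S_IRWXO)) == 0

    path = sys.argv[1] if len(sys.argv) > 1 else "/tmp/tmp.ceMYtF1fqw"
    if psk_file_permissions_ok(path):
        print(f"{path}: permissions look acceptable")
    else:
        print(f"{path}: group/other bits set; expect the attach/add_host RPC to be rejected")

The trace then restores 0600 on the key file before the next positive run.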
00:14:23.565 06:01:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84958 00:14:23.565 06:01:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:23.565 06:01:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84958 00:14:23.565 06:01:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84958 ']' 00:14:23.565 06:01:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:23.565 06:01:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:23.565 06:01:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:23.565 06:01:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:23.565 06:01:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:23.565 [2024-07-13 06:01:15.164638] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:14:23.565 [2024-07-13 06:01:15.165435] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:23.840 [2024-07-13 06:01:15.312611] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:23.840 [2024-07-13 06:01:15.360661] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:23.840 [2024-07-13 06:01:15.361056] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:23.840 [2024-07-13 06:01:15.361566] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:23.840 [2024-07-13 06:01:15.361936] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:23.840 [2024-07-13 06:01:15.362258] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
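Each of these restarts goes through the same waitforlisten step: the helper blocks until the freshly launched nvmf_tgt (here pid 84958) is accepting connections on /var/tmp/spdk.sock before any rpc.py call is made. A minimal stand-in for that wait loop is sketched below; the poll interval and timeout are arbitrary choices, and the real helper in autotest_common.sh carries more retries and diagnostics than this.

    # Minimal stand-in for waitforlisten: poll until the application's UNIX
    # RPC socket accepts a connection, or give up after a timeout.
    import socket
    import time

    def wait_for_rpc_socket(path: str, timeout_s: float = 30.0, poll_s: float = 0.2) -> None:
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            try:
                with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
                    s.connect(path)
                    return  # the target is up and listening
            except OSError:
                time.sleep(poll_s)  # socket missing or not accepting yet
        raise TimeoutError(f"no listener on {path} after {timeout_s}s")

    wait_for_rpc_socket("/var/tmp/spdk.sock")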
00:14:23.840 [2024-07-13 06:01:15.362669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:23.840 [2024-07-13 06:01:15.395192] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:24.778 06:01:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:24.778 06:01:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:24.778 06:01:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:24.778 06:01:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:24.778 06:01:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:24.778 06:01:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:24.778 06:01:16 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.ceMYtF1fqw 00:14:24.778 06:01:16 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ceMYtF1fqw 00:14:24.778 06:01:16 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:24.778 [2024-07-13 06:01:16.463594] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:24.778 06:01:16 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:25.343 06:01:16 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:25.343 [2024-07-13 06:01:16.979748] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:25.343 [2024-07-13 06:01:16.979965] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:25.343 06:01:16 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:25.601 malloc0 00:14:25.601 06:01:17 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:25.858 06:01:17 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ceMYtF1fqw 00:14:26.115 [2024-07-13 06:01:17.706277] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:26.115 06:01:17 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:26.115 06:01:17 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=85013 00:14:26.115 06:01:17 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:26.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:14:26.115 06:01:17 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 85013 /var/tmp/bdevperf.sock 00:14:26.115 06:01:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85013 ']' 00:14:26.115 06:01:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:26.115 06:01:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:26.115 06:01:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:26.115 06:01:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:26.115 06:01:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:26.115 [2024-07-13 06:01:17.769215] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:14:26.115 [2024-07-13 06:01:17.769500] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85013 ] 00:14:26.372 [2024-07-13 06:01:17.906682] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.372 [2024-07-13 06:01:17.949787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:26.372 [2024-07-13 06:01:17.984939] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:26.372 06:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:26.372 06:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:26.372 06:01:18 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ceMYtF1fqw 00:14:26.630 [2024-07-13 06:01:18.269788] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:26.630 [2024-07-13 06:01:18.269900] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:26.630 TLSTESTn1 00:14:26.889 06:01:18 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:14:27.148 06:01:18 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:14:27.148 "subsystems": [ 00:14:27.148 { 00:14:27.148 "subsystem": "keyring", 00:14:27.148 "config": [] 00:14:27.148 }, 00:14:27.148 { 00:14:27.148 "subsystem": "iobuf", 00:14:27.148 "config": [ 00:14:27.148 { 00:14:27.148 "method": "iobuf_set_options", 00:14:27.148 "params": { 00:14:27.148 "small_pool_count": 8192, 00:14:27.148 "large_pool_count": 1024, 00:14:27.148 "small_bufsize": 8192, 00:14:27.148 "large_bufsize": 135168 00:14:27.148 } 00:14:27.148 } 00:14:27.148 ] 00:14:27.148 }, 00:14:27.148 { 00:14:27.148 "subsystem": "sock", 00:14:27.148 "config": [ 00:14:27.148 { 00:14:27.148 "method": "sock_set_default_impl", 00:14:27.148 "params": { 00:14:27.148 "impl_name": "uring" 00:14:27.148 } 00:14:27.148 }, 00:14:27.148 { 00:14:27.148 "method": "sock_impl_set_options", 00:14:27.148 "params": { 00:14:27.148 "impl_name": "ssl", 00:14:27.148 "recv_buf_size": 4096, 00:14:27.148 "send_buf_size": 4096, 00:14:27.148 "enable_recv_pipe": true, 00:14:27.148 
"enable_quickack": false, 00:14:27.148 "enable_placement_id": 0, 00:14:27.148 "enable_zerocopy_send_server": true, 00:14:27.148 "enable_zerocopy_send_client": false, 00:14:27.148 "zerocopy_threshold": 0, 00:14:27.148 "tls_version": 0, 00:14:27.148 "enable_ktls": false 00:14:27.148 } 00:14:27.148 }, 00:14:27.148 { 00:14:27.148 "method": "sock_impl_set_options", 00:14:27.148 "params": { 00:14:27.148 "impl_name": "posix", 00:14:27.148 "recv_buf_size": 2097152, 00:14:27.148 "send_buf_size": 2097152, 00:14:27.148 "enable_recv_pipe": true, 00:14:27.148 "enable_quickack": false, 00:14:27.148 "enable_placement_id": 0, 00:14:27.148 "enable_zerocopy_send_server": true, 00:14:27.148 "enable_zerocopy_send_client": false, 00:14:27.148 "zerocopy_threshold": 0, 00:14:27.148 "tls_version": 0, 00:14:27.148 "enable_ktls": false 00:14:27.148 } 00:14:27.148 }, 00:14:27.148 { 00:14:27.148 "method": "sock_impl_set_options", 00:14:27.148 "params": { 00:14:27.148 "impl_name": "uring", 00:14:27.148 "recv_buf_size": 2097152, 00:14:27.148 "send_buf_size": 2097152, 00:14:27.148 "enable_recv_pipe": true, 00:14:27.148 "enable_quickack": false, 00:14:27.148 "enable_placement_id": 0, 00:14:27.148 "enable_zerocopy_send_server": false, 00:14:27.148 "enable_zerocopy_send_client": false, 00:14:27.148 "zerocopy_threshold": 0, 00:14:27.148 "tls_version": 0, 00:14:27.148 "enable_ktls": false 00:14:27.148 } 00:14:27.148 } 00:14:27.148 ] 00:14:27.148 }, 00:14:27.148 { 00:14:27.148 "subsystem": "vmd", 00:14:27.148 "config": [] 00:14:27.148 }, 00:14:27.148 { 00:14:27.148 "subsystem": "accel", 00:14:27.148 "config": [ 00:14:27.148 { 00:14:27.148 "method": "accel_set_options", 00:14:27.148 "params": { 00:14:27.148 "small_cache_size": 128, 00:14:27.148 "large_cache_size": 16, 00:14:27.148 "task_count": 2048, 00:14:27.148 "sequence_count": 2048, 00:14:27.148 "buf_count": 2048 00:14:27.148 } 00:14:27.148 } 00:14:27.148 ] 00:14:27.148 }, 00:14:27.148 { 00:14:27.148 "subsystem": "bdev", 00:14:27.148 "config": [ 00:14:27.148 { 00:14:27.148 "method": "bdev_set_options", 00:14:27.148 "params": { 00:14:27.148 "bdev_io_pool_size": 65535, 00:14:27.148 "bdev_io_cache_size": 256, 00:14:27.148 "bdev_auto_examine": true, 00:14:27.148 "iobuf_small_cache_size": 128, 00:14:27.148 "iobuf_large_cache_size": 16 00:14:27.148 } 00:14:27.148 }, 00:14:27.148 { 00:14:27.148 "method": "bdev_raid_set_options", 00:14:27.148 "params": { 00:14:27.148 "process_window_size_kb": 1024 00:14:27.148 } 00:14:27.148 }, 00:14:27.148 { 00:14:27.148 "method": "bdev_iscsi_set_options", 00:14:27.148 "params": { 00:14:27.148 "timeout_sec": 30 00:14:27.148 } 00:14:27.148 }, 00:14:27.148 { 00:14:27.148 "method": "bdev_nvme_set_options", 00:14:27.148 "params": { 00:14:27.148 "action_on_timeout": "none", 00:14:27.148 "timeout_us": 0, 00:14:27.148 "timeout_admin_us": 0, 00:14:27.148 "keep_alive_timeout_ms": 10000, 00:14:27.148 "arbitration_burst": 0, 00:14:27.148 "low_priority_weight": 0, 00:14:27.148 "medium_priority_weight": 0, 00:14:27.148 "high_priority_weight": 0, 00:14:27.148 "nvme_adminq_poll_period_us": 10000, 00:14:27.148 "nvme_ioq_poll_period_us": 0, 00:14:27.148 "io_queue_requests": 0, 00:14:27.148 "delay_cmd_submit": true, 00:14:27.148 "transport_retry_count": 4, 00:14:27.148 "bdev_retry_count": 3, 00:14:27.148 "transport_ack_timeout": 0, 00:14:27.148 "ctrlr_loss_timeout_sec": 0, 00:14:27.148 "reconnect_delay_sec": 0, 00:14:27.148 "fast_io_fail_timeout_sec": 0, 00:14:27.148 "disable_auto_failback": false, 00:14:27.148 "generate_uuids": false, 00:14:27.148 
"transport_tos": 0, 00:14:27.148 "nvme_error_stat": false, 00:14:27.148 "rdma_srq_size": 0, 00:14:27.148 "io_path_stat": false, 00:14:27.148 "allow_accel_sequence": false, 00:14:27.148 "rdma_max_cq_size": 0, 00:14:27.148 "rdma_cm_event_timeout_ms": 0, 00:14:27.148 "dhchap_digests": [ 00:14:27.148 "sha256", 00:14:27.148 "sha384", 00:14:27.148 "sha512" 00:14:27.148 ], 00:14:27.148 "dhchap_dhgroups": [ 00:14:27.148 "null", 00:14:27.148 "ffdhe2048", 00:14:27.148 "ffdhe3072", 00:14:27.148 "ffdhe4096", 00:14:27.148 "ffdhe6144", 00:14:27.148 "ffdhe8192" 00:14:27.148 ] 00:14:27.148 } 00:14:27.148 }, 00:14:27.148 { 00:14:27.148 "method": "bdev_nvme_set_hotplug", 00:14:27.148 "params": { 00:14:27.148 "period_us": 100000, 00:14:27.148 "enable": false 00:14:27.148 } 00:14:27.148 }, 00:14:27.149 { 00:14:27.149 "method": "bdev_malloc_create", 00:14:27.149 "params": { 00:14:27.149 "name": "malloc0", 00:14:27.149 "num_blocks": 8192, 00:14:27.149 "block_size": 4096, 00:14:27.149 "physical_block_size": 4096, 00:14:27.149 "uuid": "7b363178-8b35-4a51-9036-d94caee32f37", 00:14:27.149 "optimal_io_boundary": 0 00:14:27.149 } 00:14:27.149 }, 00:14:27.149 { 00:14:27.149 "method": "bdev_wait_for_examine" 00:14:27.149 } 00:14:27.149 ] 00:14:27.149 }, 00:14:27.149 { 00:14:27.149 "subsystem": "nbd", 00:14:27.149 "config": [] 00:14:27.149 }, 00:14:27.149 { 00:14:27.149 "subsystem": "scheduler", 00:14:27.149 "config": [ 00:14:27.149 { 00:14:27.149 "method": "framework_set_scheduler", 00:14:27.149 "params": { 00:14:27.149 "name": "static" 00:14:27.149 } 00:14:27.149 } 00:14:27.149 ] 00:14:27.149 }, 00:14:27.149 { 00:14:27.149 "subsystem": "nvmf", 00:14:27.149 "config": [ 00:14:27.149 { 00:14:27.149 "method": "nvmf_set_config", 00:14:27.149 "params": { 00:14:27.149 "discovery_filter": "match_any", 00:14:27.149 "admin_cmd_passthru": { 00:14:27.149 "identify_ctrlr": false 00:14:27.149 } 00:14:27.149 } 00:14:27.149 }, 00:14:27.149 { 00:14:27.149 "method": "nvmf_set_max_subsystems", 00:14:27.149 "params": { 00:14:27.149 "max_subsystems": 1024 00:14:27.149 } 00:14:27.149 }, 00:14:27.149 { 00:14:27.149 "method": "nvmf_set_crdt", 00:14:27.149 "params": { 00:14:27.149 "crdt1": 0, 00:14:27.149 "crdt2": 0, 00:14:27.149 "crdt3": 0 00:14:27.149 } 00:14:27.149 }, 00:14:27.149 { 00:14:27.149 "method": "nvmf_create_transport", 00:14:27.149 "params": { 00:14:27.149 "trtype": "TCP", 00:14:27.149 "max_queue_depth": 128, 00:14:27.149 "max_io_qpairs_per_ctrlr": 127, 00:14:27.149 "in_capsule_data_size": 4096, 00:14:27.149 "max_io_size": 131072, 00:14:27.149 "io_unit_size": 131072, 00:14:27.149 "max_aq_depth": 128, 00:14:27.149 "num_shared_buffers": 511, 00:14:27.149 "buf_cache_size": 4294967295, 00:14:27.149 "dif_insert_or_strip": false, 00:14:27.149 "zcopy": false, 00:14:27.149 "c2h_success": false, 00:14:27.149 "sock_priority": 0, 00:14:27.149 "abort_timeout_sec": 1, 00:14:27.149 "ack_timeout": 0, 00:14:27.149 "data_wr_pool_size": 0 00:14:27.149 } 00:14:27.149 }, 00:14:27.149 { 00:14:27.149 "method": "nvmf_create_subsystem", 00:14:27.149 "params": { 00:14:27.149 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:27.149 "allow_any_host": false, 00:14:27.149 "serial_number": "SPDK00000000000001", 00:14:27.149 "model_number": "SPDK bdev Controller", 00:14:27.149 "max_namespaces": 10, 00:14:27.149 "min_cntlid": 1, 00:14:27.149 "max_cntlid": 65519, 00:14:27.149 "ana_reporting": false 00:14:27.149 } 00:14:27.149 }, 00:14:27.149 { 00:14:27.149 "method": "nvmf_subsystem_add_host", 00:14:27.149 "params": { 00:14:27.149 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:14:27.149 "host": "nqn.2016-06.io.spdk:host1", 00:14:27.149 "psk": "/tmp/tmp.ceMYtF1fqw" 00:14:27.149 } 00:14:27.149 }, 00:14:27.149 { 00:14:27.149 "method": "nvmf_subsystem_add_ns", 00:14:27.149 "params": { 00:14:27.149 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:27.149 "namespace": { 00:14:27.149 "nsid": 1, 00:14:27.149 "bdev_name": "malloc0", 00:14:27.149 "nguid": "7B3631788B354A519036D94CAEE32F37", 00:14:27.149 "uuid": "7b363178-8b35-4a51-9036-d94caee32f37", 00:14:27.149 "no_auto_visible": false 00:14:27.149 } 00:14:27.149 } 00:14:27.149 }, 00:14:27.149 { 00:14:27.149 "method": "nvmf_subsystem_add_listener", 00:14:27.149 "params": { 00:14:27.149 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:27.149 "listen_address": { 00:14:27.149 "trtype": "TCP", 00:14:27.149 "adrfam": "IPv4", 00:14:27.149 "traddr": "10.0.0.2", 00:14:27.149 "trsvcid": "4420" 00:14:27.149 }, 00:14:27.149 "secure_channel": true 00:14:27.149 } 00:14:27.149 } 00:14:27.149 ] 00:14:27.149 } 00:14:27.149 ] 00:14:27.149 }' 00:14:27.149 06:01:18 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:27.407 06:01:19 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:14:27.407 "subsystems": [ 00:14:27.407 { 00:14:27.407 "subsystem": "keyring", 00:14:27.407 "config": [] 00:14:27.407 }, 00:14:27.407 { 00:14:27.407 "subsystem": "iobuf", 00:14:27.407 "config": [ 00:14:27.407 { 00:14:27.407 "method": "iobuf_set_options", 00:14:27.407 "params": { 00:14:27.407 "small_pool_count": 8192, 00:14:27.407 "large_pool_count": 1024, 00:14:27.407 "small_bufsize": 8192, 00:14:27.407 "large_bufsize": 135168 00:14:27.407 } 00:14:27.407 } 00:14:27.407 ] 00:14:27.407 }, 00:14:27.407 { 00:14:27.407 "subsystem": "sock", 00:14:27.407 "config": [ 00:14:27.407 { 00:14:27.407 "method": "sock_set_default_impl", 00:14:27.407 "params": { 00:14:27.407 "impl_name": "uring" 00:14:27.407 } 00:14:27.407 }, 00:14:27.407 { 00:14:27.407 "method": "sock_impl_set_options", 00:14:27.407 "params": { 00:14:27.407 "impl_name": "ssl", 00:14:27.407 "recv_buf_size": 4096, 00:14:27.407 "send_buf_size": 4096, 00:14:27.407 "enable_recv_pipe": true, 00:14:27.407 "enable_quickack": false, 00:14:27.407 "enable_placement_id": 0, 00:14:27.407 "enable_zerocopy_send_server": true, 00:14:27.407 "enable_zerocopy_send_client": false, 00:14:27.407 "zerocopy_threshold": 0, 00:14:27.407 "tls_version": 0, 00:14:27.407 "enable_ktls": false 00:14:27.407 } 00:14:27.407 }, 00:14:27.407 { 00:14:27.407 "method": "sock_impl_set_options", 00:14:27.407 "params": { 00:14:27.407 "impl_name": "posix", 00:14:27.407 "recv_buf_size": 2097152, 00:14:27.407 "send_buf_size": 2097152, 00:14:27.407 "enable_recv_pipe": true, 00:14:27.407 "enable_quickack": false, 00:14:27.407 "enable_placement_id": 0, 00:14:27.407 "enable_zerocopy_send_server": true, 00:14:27.407 "enable_zerocopy_send_client": false, 00:14:27.407 "zerocopy_threshold": 0, 00:14:27.407 "tls_version": 0, 00:14:27.407 "enable_ktls": false 00:14:27.407 } 00:14:27.407 }, 00:14:27.407 { 00:14:27.407 "method": "sock_impl_set_options", 00:14:27.407 "params": { 00:14:27.407 "impl_name": "uring", 00:14:27.407 "recv_buf_size": 2097152, 00:14:27.407 "send_buf_size": 2097152, 00:14:27.407 "enable_recv_pipe": true, 00:14:27.407 "enable_quickack": false, 00:14:27.407 "enable_placement_id": 0, 00:14:27.408 "enable_zerocopy_send_server": false, 00:14:27.408 "enable_zerocopy_send_client": false, 00:14:27.408 "zerocopy_threshold": 0, 00:14:27.408 "tls_version": 0, 00:14:27.408 
"enable_ktls": false 00:14:27.408 } 00:14:27.408 } 00:14:27.408 ] 00:14:27.408 }, 00:14:27.408 { 00:14:27.408 "subsystem": "vmd", 00:14:27.408 "config": [] 00:14:27.408 }, 00:14:27.408 { 00:14:27.408 "subsystem": "accel", 00:14:27.408 "config": [ 00:14:27.408 { 00:14:27.408 "method": "accel_set_options", 00:14:27.408 "params": { 00:14:27.408 "small_cache_size": 128, 00:14:27.408 "large_cache_size": 16, 00:14:27.408 "task_count": 2048, 00:14:27.408 "sequence_count": 2048, 00:14:27.408 "buf_count": 2048 00:14:27.408 } 00:14:27.408 } 00:14:27.408 ] 00:14:27.408 }, 00:14:27.408 { 00:14:27.408 "subsystem": "bdev", 00:14:27.408 "config": [ 00:14:27.408 { 00:14:27.408 "method": "bdev_set_options", 00:14:27.408 "params": { 00:14:27.408 "bdev_io_pool_size": 65535, 00:14:27.408 "bdev_io_cache_size": 256, 00:14:27.408 "bdev_auto_examine": true, 00:14:27.408 "iobuf_small_cache_size": 128, 00:14:27.408 "iobuf_large_cache_size": 16 00:14:27.408 } 00:14:27.408 }, 00:14:27.408 { 00:14:27.408 "method": "bdev_raid_set_options", 00:14:27.408 "params": { 00:14:27.408 "process_window_size_kb": 1024 00:14:27.408 } 00:14:27.408 }, 00:14:27.408 { 00:14:27.408 "method": "bdev_iscsi_set_options", 00:14:27.408 "params": { 00:14:27.408 "timeout_sec": 30 00:14:27.408 } 00:14:27.408 }, 00:14:27.408 { 00:14:27.408 "method": "bdev_nvme_set_options", 00:14:27.408 "params": { 00:14:27.408 "action_on_timeout": "none", 00:14:27.408 "timeout_us": 0, 00:14:27.408 "timeout_admin_us": 0, 00:14:27.408 "keep_alive_timeout_ms": 10000, 00:14:27.408 "arbitration_burst": 0, 00:14:27.408 "low_priority_weight": 0, 00:14:27.408 "medium_priority_weight": 0, 00:14:27.408 "high_priority_weight": 0, 00:14:27.408 "nvme_adminq_poll_period_us": 10000, 00:14:27.408 "nvme_ioq_poll_period_us": 0, 00:14:27.408 "io_queue_requests": 512, 00:14:27.408 "delay_cmd_submit": true, 00:14:27.408 "transport_retry_count": 4, 00:14:27.408 "bdev_retry_count": 3, 00:14:27.408 "transport_ack_timeout": 0, 00:14:27.408 "ctrlr_loss_timeout_sec": 0, 00:14:27.408 "reconnect_delay_sec": 0, 00:14:27.408 "fast_io_fail_timeout_sec": 0, 00:14:27.408 "disable_auto_failback": false, 00:14:27.408 "generate_uuids": false, 00:14:27.408 "transport_tos": 0, 00:14:27.408 "nvme_error_stat": false, 00:14:27.408 "rdma_srq_size": 0, 00:14:27.408 "io_path_stat": false, 00:14:27.408 "allow_accel_sequence": false, 00:14:27.408 "rdma_max_cq_size": 0, 00:14:27.408 "rdma_cm_event_timeout_ms": 0, 00:14:27.408 "dhchap_digests": [ 00:14:27.408 "sha256", 00:14:27.408 "sha384", 00:14:27.408 "sha512" 00:14:27.408 ], 00:14:27.408 "dhchap_dhgroups": [ 00:14:27.408 "null", 00:14:27.408 "ffdhe2048", 00:14:27.408 "ffdhe3072", 00:14:27.408 "ffdhe4096", 00:14:27.408 "ffdhe6144", 00:14:27.408 "ffdhe8192" 00:14:27.408 ] 00:14:27.408 } 00:14:27.408 }, 00:14:27.408 { 00:14:27.408 "method": "bdev_nvme_attach_controller", 00:14:27.408 "params": { 00:14:27.408 "name": "TLSTEST", 00:14:27.408 "trtype": "TCP", 00:14:27.408 "adrfam": "IPv4", 00:14:27.408 "traddr": "10.0.0.2", 00:14:27.408 "trsvcid": "4420", 00:14:27.408 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:27.408 "prchk_reftag": false, 00:14:27.408 "prchk_guard": false, 00:14:27.408 "ctrlr_loss_timeout_sec": 0, 00:14:27.408 "reconnect_delay_sec": 0, 00:14:27.408 "fast_io_fail_timeout_sec": 0, 00:14:27.408 "psk": "/tmp/tmp.ceMYtF1fqw", 00:14:27.408 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:27.408 "hdgst": false, 00:14:27.408 "ddgst": false 00:14:27.408 } 00:14:27.408 }, 00:14:27.408 { 00:14:27.408 "method": "bdev_nvme_set_hotplug", 00:14:27.408 
"params": { 00:14:27.408 "period_us": 100000, 00:14:27.408 "enable": false 00:14:27.408 } 00:14:27.408 }, 00:14:27.408 { 00:14:27.408 "method": "bdev_wait_for_examine" 00:14:27.408 } 00:14:27.408 ] 00:14:27.408 }, 00:14:27.408 { 00:14:27.408 "subsystem": "nbd", 00:14:27.408 "config": [] 00:14:27.408 } 00:14:27.408 ] 00:14:27.408 }' 00:14:27.408 06:01:19 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 85013 00:14:27.408 06:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85013 ']' 00:14:27.408 06:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85013 00:14:27.408 06:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:27.408 06:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:27.408 06:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85013 00:14:27.408 killing process with pid 85013 00:14:27.408 Received shutdown signal, test time was about 10.000000 seconds 00:14:27.408 00:14:27.408 Latency(us) 00:14:27.408 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:27.408 =================================================================================================================== 00:14:27.408 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:27.408 06:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:27.408 06:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:27.408 06:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85013' 00:14:27.408 06:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85013 00:14:27.408 [2024-07-13 06:01:19.036788] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:27.408 06:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85013 00:14:27.666 06:01:19 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 84958 00:14:27.666 06:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84958 ']' 00:14:27.666 06:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84958 00:14:27.666 06:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:27.666 06:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:27.666 06:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84958 00:14:27.666 killing process with pid 84958 00:14:27.666 06:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:27.666 06:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:27.666 06:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84958' 00:14:27.666 06:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84958 00:14:27.666 [2024-07-13 06:01:19.212479] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:27.666 06:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84958 00:14:27.666 06:01:19 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:14:27.666 06:01:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:27.666 06:01:19 nvmf_tcp.nvmf_tls -- 
target/tls.sh@203 -- # echo '{ 00:14:27.666 "subsystems": [ 00:14:27.666 { 00:14:27.666 "subsystem": "keyring", 00:14:27.667 "config": [] 00:14:27.667 }, 00:14:27.667 { 00:14:27.667 "subsystem": "iobuf", 00:14:27.667 "config": [ 00:14:27.667 { 00:14:27.667 "method": "iobuf_set_options", 00:14:27.667 "params": { 00:14:27.667 "small_pool_count": 8192, 00:14:27.667 "large_pool_count": 1024, 00:14:27.667 "small_bufsize": 8192, 00:14:27.667 "large_bufsize": 135168 00:14:27.667 } 00:14:27.667 } 00:14:27.667 ] 00:14:27.667 }, 00:14:27.667 { 00:14:27.667 "subsystem": "sock", 00:14:27.667 "config": [ 00:14:27.667 { 00:14:27.667 "method": "sock_set_default_impl", 00:14:27.667 "params": { 00:14:27.667 "impl_name": "uring" 00:14:27.667 } 00:14:27.667 }, 00:14:27.667 { 00:14:27.667 "method": "sock_impl_set_options", 00:14:27.667 "params": { 00:14:27.667 "impl_name": "ssl", 00:14:27.667 "recv_buf_size": 4096, 00:14:27.667 "send_buf_size": 4096, 00:14:27.667 "enable_recv_pipe": true, 00:14:27.667 "enable_quickack": false, 00:14:27.667 "enable_placement_id": 0, 00:14:27.667 "enable_zerocopy_send_server": true, 00:14:27.667 "enable_zerocopy_send_client": false, 00:14:27.667 "zerocopy_threshold": 0, 00:14:27.667 "tls_version": 0, 00:14:27.667 "enable_ktls": false 00:14:27.667 } 00:14:27.667 }, 00:14:27.667 { 00:14:27.667 "method": "sock_impl_set_options", 00:14:27.667 "params": { 00:14:27.667 "impl_name": "posix", 00:14:27.667 "recv_buf_size": 2097152, 00:14:27.667 "send_buf_size": 2097152, 00:14:27.667 "enable_recv_pipe": true, 00:14:27.667 "enable_quickack": false, 00:14:27.667 "enable_placement_id": 0, 00:14:27.667 "enable_zerocopy_send_server": true, 00:14:27.667 "enable_zerocopy_send_client": false, 00:14:27.667 "zerocopy_threshold": 0, 00:14:27.667 "tls_version": 0, 00:14:27.667 "enable_ktls": false 00:14:27.667 } 00:14:27.667 }, 00:14:27.667 { 00:14:27.667 "method": "sock_impl_set_options", 00:14:27.667 "params": { 00:14:27.667 "impl_name": "uring", 00:14:27.667 "recv_buf_size": 2097152, 00:14:27.667 "send_buf_size": 2097152, 00:14:27.667 "enable_recv_pipe": true, 00:14:27.667 "enable_quickack": false, 00:14:27.667 "enable_placement_id": 0, 00:14:27.667 "enable_zerocopy_send_server": false, 00:14:27.667 "enable_zerocopy_send_client": false, 00:14:27.667 "zerocopy_threshold": 0, 00:14:27.667 "tls_version": 0, 00:14:27.667 "enable_ktls": false 00:14:27.667 } 00:14:27.667 } 00:14:27.667 ] 00:14:27.667 }, 00:14:27.667 { 00:14:27.667 "subsystem": "vmd", 00:14:27.667 "config": [] 00:14:27.667 }, 00:14:27.667 { 00:14:27.667 "subsystem": "accel", 00:14:27.667 "config": [ 00:14:27.667 { 00:14:27.667 "method": "accel_set_options", 00:14:27.667 "params": { 00:14:27.667 "small_cache_size": 128, 00:14:27.667 "large_cache_size": 16, 00:14:27.667 "task_count": 2048, 00:14:27.667 "sequence_count": 2048, 00:14:27.667 "buf_count": 2048 00:14:27.667 } 00:14:27.667 } 00:14:27.667 ] 00:14:27.667 }, 00:14:27.667 { 00:14:27.667 "subsystem": "bdev", 00:14:27.667 "config": [ 00:14:27.667 { 00:14:27.667 "method": "bdev_set_options", 00:14:27.667 "params": { 00:14:27.667 "bdev_io_pool_size": 65535, 00:14:27.667 "bdev_io_cache_size": 256, 00:14:27.667 "bdev_auto_examine": true, 00:14:27.667 "iobuf_small_cache_size": 128, 00:14:27.667 "iobuf_large_cache_size": 16 00:14:27.667 } 00:14:27.667 }, 00:14:27.667 { 00:14:27.667 "method": "bdev_raid_set_options", 00:14:27.667 "params": { 00:14:27.667 "process_window_size_kb": 1024 00:14:27.667 } 00:14:27.667 }, 00:14:27.667 { 00:14:27.667 "method": "bdev_iscsi_set_options", 00:14:27.667 
"params": { 00:14:27.667 "timeout_sec": 30 00:14:27.667 } 00:14:27.667 }, 00:14:27.667 { 00:14:27.667 "method": "bdev_nvme_set_options", 00:14:27.667 "params": { 00:14:27.667 "action_on_timeout": "none", 00:14:27.667 "timeout_us": 0, 00:14:27.667 "timeout_admin_us": 0, 00:14:27.667 "keep_alive_timeout_ms": 10000, 00:14:27.667 "arbitration_burst": 0, 00:14:27.667 "low_priority_weight": 0, 00:14:27.667 "medium_priority_weight": 0, 00:14:27.667 "high_priority_weight": 0, 00:14:27.667 "nvme_adminq_poll_period_us": 10000, 00:14:27.667 "nvme_ioq_poll_period_us": 0, 00:14:27.667 "io_queue_requests": 0, 00:14:27.667 "delay_cmd_submit": true, 00:14:27.667 "transport_retry_count": 4, 00:14:27.667 "bdev_retry_count": 3, 00:14:27.667 "transport_ack_timeout": 0, 00:14:27.667 "ctrlr_loss_timeout_sec": 0, 00:14:27.667 "reconnect_delay_sec": 0, 00:14:27.667 "fast_io_fail_timeout_sec": 0, 00:14:27.667 "disable_auto_failback": false, 00:14:27.667 "generate_uuids": false, 00:14:27.667 "transport_tos": 0, 00:14:27.667 "nvme_error_stat": false, 00:14:27.667 "rdma_srq_size": 0, 00:14:27.667 "io_path_stat": false, 00:14:27.667 "allow_accel_sequence": false, 00:14:27.667 "rdma_max_cq_size": 0, 00:14:27.667 "rdma_cm_event_timeout_ms": 0, 00:14:27.667 "dhchap_digests": [ 00:14:27.667 "sha256", 00:14:27.667 "sha384", 00:14:27.667 "sha512" 00:14:27.667 ], 00:14:27.667 "dhchap_dhgroups": [ 00:14:27.667 "null", 00:14:27.667 "ffdhe2048", 00:14:27.667 "ffdhe3072", 00:14:27.667 "ffdhe4096", 00:14:27.667 "ffdhe6144", 00:14:27.667 "ffdhe8192" 00:14:27.667 ] 00:14:27.667 } 00:14:27.667 }, 00:14:27.667 { 00:14:27.667 "method": "bdev_nvme_set_hotplug", 00:14:27.667 "params": { 00:14:27.667 "period_us": 100000, 00:14:27.667 "enable": false 00:14:27.667 } 00:14:27.667 }, 00:14:27.667 { 00:14:27.667 "method": "bdev_malloc_create", 00:14:27.667 "params": { 00:14:27.667 "name": "malloc0", 00:14:27.667 "num_blocks": 8192, 00:14:27.667 "block_size": 4096, 00:14:27.667 "physical_block_size": 4096, 00:14:27.667 "uuid": "7b363178-8b35-4a51-9036-d94caee32f37", 00:14:27.667 "optimal_io_boundary": 0 00:14:27.667 } 00:14:27.667 }, 00:14:27.667 { 00:14:27.667 "method": "bdev_wait_for_examine" 00:14:27.667 } 00:14:27.667 ] 00:14:27.667 }, 00:14:27.667 { 00:14:27.667 "subsystem": "nbd", 00:14:27.667 "config": [] 00:14:27.667 }, 00:14:27.667 { 00:14:27.667 "subsystem": "scheduler", 00:14:27.667 "config": [ 00:14:27.667 { 00:14:27.667 "method": "framework_set_scheduler", 00:14:27.667 "params": { 00:14:27.667 "name": "static" 00:14:27.667 } 00:14:27.667 } 00:14:27.667 ] 00:14:27.667 }, 00:14:27.667 { 00:14:27.667 "subsystem": "nvmf", 00:14:27.667 "config": [ 00:14:27.667 { 00:14:27.667 "method": "nvmf_set_config", 00:14:27.667 "params": { 00:14:27.667 "discovery_filter": "match_any", 00:14:27.667 "admin_cmd_passthru": { 00:14:27.667 "identify_ctrlr": false 00:14:27.667 } 00:14:27.667 } 00:14:27.667 }, 00:14:27.667 { 00:14:27.667 "method": "nvmf_set_max_subsystems", 00:14:27.667 "params": { 00:14:27.667 "max_subsystems": 1024 00:14:27.667 } 00:14:27.667 }, 00:14:27.667 { 00:14:27.667 "method": "nvmf_set_crdt", 00:14:27.667 "params": { 00:14:27.667 "crdt1": 0, 00:14:27.667 "crdt2": 0, 00:14:27.667 "crdt3": 0 00:14:27.667 } 00:14:27.667 }, 00:14:27.667 { 00:14:27.667 "method": "nvmf_create_transport", 00:14:27.667 "params": { 00:14:27.667 "trtype": "TCP", 00:14:27.667 "max_queue_depth": 128, 00:14:27.667 "max_io_qpairs_per_ctrlr": 127, 00:14:27.667 "in_capsule_data_size": 4096, 00:14:27.667 "max_io_size": 131072, 00:14:27.667 "io_unit_size": 131072, 
00:14:27.667 "max_aq_depth": 128, 00:14:27.667 "num_shared_buffers": 511, 00:14:27.667 "buf_cache_size": 4294967295, 00:14:27.667 "dif_insert_or_strip": false, 00:14:27.667 "zcopy": false, 00:14:27.667 "c2h_success": false, 00:14:27.667 "sock_priority": 0, 00:14:27.667 "abort_timeout_sec": 1, 00:14:27.667 "ack_timeout": 0, 00:14:27.667 "data_wr_pool_size": 0 00:14:27.667 } 00:14:27.667 }, 00:14:27.667 { 00:14:27.667 "method": "nvmf_create_subsystem", 00:14:27.667 "params": { 00:14:27.667 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:27.668 "allow_any_host": false, 00:14:27.668 "serial_number": "SPDK00000000000001", 00:14:27.668 "model_number": "SPDK bdev Controller", 00:14:27.668 "max_namespaces": 10, 00:14:27.668 "min_cntlid": 1, 00:14:27.668 "max_cntlid": 65519, 00:14:27.668 "ana_reporting": false 00:14:27.668 } 00:14:27.668 }, 00:14:27.668 { 00:14:27.668 "method": "nvmf_subsystem_add_host", 00:14:27.668 "params": { 00:14:27.668 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:27.668 "host": "nqn.2016-06.io.spdk:host1", 00:14:27.668 "psk": "/tmp/tmp.ceMYtF1fqw" 00:14:27.668 } 00:14:27.668 }, 00:14:27.668 { 00:14:27.668 "method": "nvmf_subsystem_add_ns", 00:14:27.668 "params": { 00:14:27.668 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:27.668 "namespace": { 00:14:27.668 "nsid": 1, 00:14:27.668 "bdev_name": "malloc0", 00:14:27.668 "nguid": "7B3631788B354A519036D94CAEE32F37", 00:14:27.668 "uuid": "7b363178-8b35-4a51-9036-d94caee32f37", 00:14:27.668 "no_auto_visible": false 00:14:27.668 } 00:14:27.668 } 00:14:27.668 }, 00:14:27.668 { 00:14:27.668 "method": "nvmf_subsystem_add_listener", 00:14:27.668 "params": { 00:14:27.668 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:27.668 "listen_address": { 00:14:27.668 "trtype": "TCP", 00:14:27.668 "adrfam": "IPv4", 00:14:27.668 "traddr": "10.0.0.2", 00:14:27.668 "trsvcid": "4420" 00:14:27.668 }, 00:14:27.668 "secure_channel": true 00:14:27.668 } 00:14:27.668 } 00:14:27.668 ] 00:14:27.668 } 00:14:27.668 ] 00:14:27.668 }' 00:14:27.668 06:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:27.668 06:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:27.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:27.668 06:01:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=85048 00:14:27.668 06:01:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 85048 00:14:27.668 06:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85048 ']' 00:14:27.668 06:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:27.668 06:01:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:14:27.668 06:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:27.668 06:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:27.668 06:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:27.668 06:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:27.925 [2024-07-13 06:01:19.430481] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:14:27.925 [2024-07-13 06:01:19.430578] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:27.925 [2024-07-13 06:01:19.565570] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.925 [2024-07-13 06:01:19.600888] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:27.925 [2024-07-13 06:01:19.600943] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:27.925 [2024-07-13 06:01:19.600955] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:27.925 [2024-07-13 06:01:19.600963] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:27.925 [2024-07-13 06:01:19.600969] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:27.925 [2024-07-13 06:01:19.601042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:28.182 [2024-07-13 06:01:19.745199] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:28.182 [2024-07-13 06:01:19.790718] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:28.182 [2024-07-13 06:01:19.806652] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:28.182 [2024-07-13 06:01:19.822658] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:28.182 [2024-07-13 06:01:19.823006] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:28.749 06:01:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:28.749 06:01:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:28.749 06:01:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:28.749 06:01:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:28.749 06:01:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:28.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
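The target above was brought up by nvmfappstart -m 0x2 -c /dev/fd/62, i.e. the JSON blob echoed into the log is piped to nvmf_tgt as its startup configuration rather than applied over RPC afterwards. A minimal sketch of that pattern, assuming this run's paths, NQNs, address and PSK file, trimmed down to the TLS-relevant nvmf calls (the real invocation is additionally wrapped in ip netns exec nvmf_tgt_ns_spdk, omitted here):

# Sketch only: start nvmf_tgt with its config supplied via process substitution,
# mirroring nvmfappstart -m 0x2 -c /dev/fd/62 above. The JSON is a trimmed subset
# of the echoed config: subsystem, PSK-authenticated host, secure-channel listener.
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "nvmf",
      "config": [
        { "method": "nvmf_create_transport",
          "params": { "trtype": "TCP" } },
        { "method": "nvmf_create_subsystem",
          "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                      "serial_number": "SPDK00000000000001",
                      "allow_any_host": false } },
        { "method": "nvmf_subsystem_add_host",
          "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                      "host": "nqn.2016-06.io.spdk:host1",
                      "psk": "/tmp/tmp.ceMYtF1fqw" } },
        { "method": "nvmf_subsystem_add_listener",
          "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                      "listen_address": { "trtype": "TCP", "adrfam": "IPv4",
                                          "traddr": "10.0.0.2", "trsvcid": "4420" },
                      "secure_channel": true } }
      ]
    }
  ]
}
EOF
)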
00:14:28.749 06:01:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:28.749 06:01:20 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=85079 00:14:28.749 06:01:20 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 85079 /var/tmp/bdevperf.sock 00:14:28.749 06:01:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85079 ']' 00:14:28.749 06:01:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:28.749 06:01:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:28.749 06:01:20 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:14:28.749 06:01:20 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:14:28.749 "subsystems": [ 00:14:28.749 { 00:14:28.749 "subsystem": "keyring", 00:14:28.749 "config": [] 00:14:28.749 }, 00:14:28.749 { 00:14:28.749 "subsystem": "iobuf", 00:14:28.749 "config": [ 00:14:28.749 { 00:14:28.749 "method": "iobuf_set_options", 00:14:28.749 "params": { 00:14:28.749 "small_pool_count": 8192, 00:14:28.749 "large_pool_count": 1024, 00:14:28.749 "small_bufsize": 8192, 00:14:28.749 "large_bufsize": 135168 00:14:28.749 } 00:14:28.749 } 00:14:28.749 ] 00:14:28.749 }, 00:14:28.749 { 00:14:28.749 "subsystem": "sock", 00:14:28.749 "config": [ 00:14:28.749 { 00:14:28.749 "method": "sock_set_default_impl", 00:14:28.749 "params": { 00:14:28.749 "impl_name": "uring" 00:14:28.749 } 00:14:28.749 }, 00:14:28.749 { 00:14:28.749 "method": "sock_impl_set_options", 00:14:28.749 "params": { 00:14:28.749 "impl_name": "ssl", 00:14:28.749 "recv_buf_size": 4096, 00:14:28.749 "send_buf_size": 4096, 00:14:28.749 "enable_recv_pipe": true, 00:14:28.749 "enable_quickack": false, 00:14:28.749 "enable_placement_id": 0, 00:14:28.749 "enable_zerocopy_send_server": true, 00:14:28.749 "enable_zerocopy_send_client": false, 00:14:28.749 "zerocopy_threshold": 0, 00:14:28.749 "tls_version": 0, 00:14:28.749 "enable_ktls": false 00:14:28.749 } 00:14:28.749 }, 00:14:28.749 { 00:14:28.749 "method": "sock_impl_set_options", 00:14:28.749 "params": { 00:14:28.749 "impl_name": "posix", 00:14:28.749 "recv_buf_size": 2097152, 00:14:28.749 "send_buf_size": 2097152, 00:14:28.749 "enable_recv_pipe": true, 00:14:28.749 "enable_quickack": false, 00:14:28.749 "enable_placement_id": 0, 00:14:28.749 "enable_zerocopy_send_server": true, 00:14:28.749 "enable_zerocopy_send_client": false, 00:14:28.749 "zerocopy_threshold": 0, 00:14:28.749 "tls_version": 0, 00:14:28.749 "enable_ktls": false 00:14:28.749 } 00:14:28.749 }, 00:14:28.749 { 00:14:28.749 "method": "sock_impl_set_options", 00:14:28.749 "params": { 00:14:28.749 "impl_name": "uring", 00:14:28.749 "recv_buf_size": 2097152, 00:14:28.749 "send_buf_size": 2097152, 00:14:28.749 "enable_recv_pipe": true, 00:14:28.749 "enable_quickack": false, 00:14:28.749 "enable_placement_id": 0, 00:14:28.749 "enable_zerocopy_send_server": false, 00:14:28.749 "enable_zerocopy_send_client": false, 00:14:28.749 "zerocopy_threshold": 0, 00:14:28.749 "tls_version": 0, 00:14:28.749 "enable_ktls": false 00:14:28.749 } 00:14:28.749 } 00:14:28.749 ] 00:14:28.749 }, 00:14:28.749 { 00:14:28.749 "subsystem": "vmd", 00:14:28.749 "config": [] 00:14:28.749 }, 00:14:28.749 { 00:14:28.749 "subsystem": "accel", 00:14:28.749 "config": [ 00:14:28.749 { 00:14:28.749 "method": "accel_set_options", 
00:14:28.749 "params": { 00:14:28.749 "small_cache_size": 128, 00:14:28.749 "large_cache_size": 16, 00:14:28.749 "task_count": 2048, 00:14:28.749 "sequence_count": 2048, 00:14:28.749 "buf_count": 2048 00:14:28.749 } 00:14:28.749 } 00:14:28.749 ] 00:14:28.749 }, 00:14:28.749 { 00:14:28.749 "subsystem": "bdev", 00:14:28.749 "config": [ 00:14:28.749 { 00:14:28.749 "method": "bdev_set_options", 00:14:28.749 "params": { 00:14:28.749 "bdev_io_pool_size": 65535, 00:14:28.749 "bdev_io_cache_size": 256, 00:14:28.749 "bdev_auto_examine": true, 00:14:28.749 "iobuf_small_cache_size": 128, 00:14:28.749 "iobuf_large_cache_size": 16 00:14:28.749 } 00:14:28.749 }, 00:14:28.750 { 00:14:28.750 "method": "bdev_raid_set_options", 00:14:28.750 "params": { 00:14:28.750 "process_window_size_kb": 1024 00:14:28.750 } 00:14:28.750 }, 00:14:28.750 { 00:14:28.750 "method": "bdev_iscsi_set_options", 00:14:28.750 "params": { 00:14:28.750 "timeout_sec": 30 00:14:28.750 } 00:14:28.750 }, 00:14:28.750 { 00:14:28.750 "method": "bdev_nvme_set_options", 00:14:28.750 "params": { 00:14:28.750 "action_on_timeout": "none", 00:14:28.750 "timeout_us": 0, 00:14:28.750 "timeout_admin_us": 0, 00:14:28.750 "keep_alive_timeout_ms": 10000, 00:14:28.750 "arbitration_burst": 0, 00:14:28.750 "low_priority_weight": 0, 00:14:28.750 "medium_priority_weight": 0, 00:14:28.750 "high_priority_weight": 0, 00:14:28.750 "nvme_adminq_poll_period_us": 10000, 00:14:28.750 "nvme_ioq_poll_period_us": 0, 00:14:28.750 "io_queue_requests": 512, 00:14:28.750 "delay_cmd_submit": true, 00:14:28.750 "transport_retry_count": 4, 00:14:28.750 "bdev_retry_count": 3, 00:14:28.750 "transport_ack_timeout": 0, 00:14:28.750 "ctrlr_loss_timeout_sec": 0, 00:14:28.750 "reconnect_delay_sec": 0, 00:14:28.750 "fast_io_fail_timeout_sec": 0, 00:14:28.750 "disable_auto_failback": false, 00:14:28.750 "generate_uuids": false, 00:14:28.750 "transport_tos": 0, 00:14:28.750 "nvme_error_stat": false, 00:14:28.750 "rdma_srq_size": 0, 00:14:28.750 "io_path_stat": false, 00:14:28.750 "allow_accel_sequence": false, 00:14:28.750 "rdma_max_cq_size": 0, 00:14:28.750 "rdma_cm_event_timeout_ms": 0, 00:14:28.750 "dhchap_digests": [ 00:14:28.750 "sha256", 00:14:28.750 "sha384", 00:14:28.750 "sha512" 00:14:28.750 ], 00:14:28.750 "dhchap_dhgroups": [ 00:14:28.750 "null", 00:14:28.750 "ffdhe2048", 00:14:28.750 "ffdhe3072", 00:14:28.750 "ffdhe4096", 00:14:28.750 "ffdhe6144", 00:14:28.750 "ffdhe8192" 00:14:28.750 ] 00:14:28.750 } 00:14:28.750 }, 00:14:28.750 { 00:14:28.750 "method": "bdev_nvme_attach_controller", 00:14:28.750 "params": { 00:14:28.750 "name": "TLSTEST", 00:14:28.750 "trtype": "TCP", 00:14:28.750 "adrfam": "IPv4", 00:14:28.750 "traddr": "10.0.0.2", 00:14:28.750 "trsvcid": "4420", 00:14:28.750 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:28.750 "prchk_reftag": false, 00:14:28.750 "prchk_guard": false, 00:14:28.750 "ctrlr_loss_timeout_sec": 0, 00:14:28.750 "reconnect_delay_sec": 0, 00:14:28.750 "fast_io_fail_timeout_sec": 0, 00:14:28.750 "psk": "/tmp/tmp.ceMYtF1fqw", 00:14:28.750 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:28.750 "hdgst": false, 00:14:28.750 "ddgst": false 00:14:28.750 } 00:14:28.750 }, 00:14:28.750 { 00:14:28.750 "method": "bdev_nvme_set_hotplug", 00:14:28.750 "params": { 00:14:28.750 "period_us": 100000, 00:14:28.750 "enable": false 00:14:28.750 } 00:14:28.750 }, 00:14:28.750 { 00:14:28.750 "method": "bdev_wait_for_examine" 00:14:28.750 } 00:14:28.750 ] 00:14:28.750 }, 00:14:28.750 { 00:14:28.750 "subsystem": "nbd", 00:14:28.750 "config": [] 00:14:28.750 } 
00:14:28.750 ] 00:14:28.750 }' 00:14:28.750 06:01:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:28.750 06:01:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:28.750 06:01:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:29.008 [2024-07-13 06:01:20.485263] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:14:29.008 [2024-07-13 06:01:20.485689] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85079 ] 00:14:29.008 [2024-07-13 06:01:20.627959] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.008 [2024-07-13 06:01:20.670314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:29.266 [2024-07-13 06:01:20.778399] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:29.266 [2024-07-13 06:01:20.796960] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:29.266 [2024-07-13 06:01:20.797362] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:29.831 06:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:29.831 06:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:29.831 06:01:21 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:30.089 Running I/O for 10 seconds... 
00:14:40.052 00:14:40.052 Latency(us) 00:14:40.053 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:40.053 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:40.053 Verification LBA range: start 0x0 length 0x2000 00:14:40.053 TLSTESTn1 : 10.02 4095.61 16.00 0.00 0.00 31189.51 7596.22 27763.43 00:14:40.053 =================================================================================================================== 00:14:40.053 Total : 4095.61 16.00 0.00 0.00 31189.51 7596.22 27763.43 00:14:40.053 0 00:14:40.053 06:01:31 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:40.053 06:01:31 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 85079 00:14:40.053 06:01:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85079 ']' 00:14:40.053 06:01:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85079 00:14:40.053 06:01:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:40.053 06:01:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:40.053 06:01:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85079 00:14:40.053 killing process with pid 85079 00:14:40.053 Received shutdown signal, test time was about 10.000000 seconds 00:14:40.053 00:14:40.053 Latency(us) 00:14:40.053 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:40.053 =================================================================================================================== 00:14:40.053 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:40.053 06:01:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:40.053 06:01:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:40.053 06:01:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85079' 00:14:40.053 06:01:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85079 00:14:40.053 [2024-07-13 06:01:31.733095] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:40.053 06:01:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85079 00:14:40.311 06:01:31 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 85048 00:14:40.311 06:01:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85048 ']' 00:14:40.311 06:01:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85048 00:14:40.311 06:01:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:40.311 06:01:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:40.311 06:01:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85048 00:14:40.311 killing process with pid 85048 00:14:40.311 06:01:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:40.311 06:01:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:40.311 06:01:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85048' 00:14:40.311 06:01:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85048 00:14:40.311 [2024-07-13 06:01:31.908800] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 
00:14:40.311 06:01:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85048 00:14:40.568 06:01:32 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:14:40.568 06:01:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:40.568 06:01:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:40.568 06:01:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:40.568 06:01:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=85219 00:14:40.568 06:01:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 85219 00:14:40.568 06:01:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:40.568 06:01:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85219 ']' 00:14:40.568 06:01:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:40.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:40.568 06:01:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:40.568 06:01:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:40.568 06:01:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:40.568 06:01:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:40.568 [2024-07-13 06:01:32.125719] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:14:40.568 [2024-07-13 06:01:32.125806] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:40.568 [2024-07-13 06:01:32.265297] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.827 [2024-07-13 06:01:32.306236] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:40.827 [2024-07-13 06:01:32.306312] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:40.827 [2024-07-13 06:01:32.306326] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:40.827 [2024-07-13 06:01:32.306336] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:40.827 [2024-07-13 06:01:32.306345] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:40.827 [2024-07-13 06:01:32.306397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:40.827 [2024-07-13 06:01:32.340909] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:40.827 06:01:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:40.827 06:01:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:40.827 06:01:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:40.827 06:01:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:40.827 06:01:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:40.827 06:01:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:40.827 06:01:32 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.ceMYtF1fqw 00:14:40.827 06:01:32 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ceMYtF1fqw 00:14:40.827 06:01:32 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:41.085 [2024-07-13 06:01:32.674165] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:41.085 06:01:32 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:41.349 06:01:32 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:41.608 [2024-07-13 06:01:33.154302] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:41.608 [2024-07-13 06:01:33.154597] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:41.608 06:01:33 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:41.865 malloc0 00:14:41.865 06:01:33 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:42.122 06:01:33 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ceMYtF1fqw 00:14:42.381 [2024-07-13 06:01:33.897365] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:42.381 06:01:33 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:42.381 06:01:33 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=85261 00:14:42.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
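Collected in one place, the setup_nvmf_tgt sequence traced above (target/tls.sh@51 through @58) boils down to the following rpc.py calls against the freshly started target; all values are this run's, with malloc0 as the backing bdev, 10.0.0.2:4420 as the listener and /tmp/tmp.ceMYtF1fqw as the PSK file:

# Target-side RPC recap, as traced above. The -k on nvmf_subsystem_add_listener
# requests the secure (TLS) channel that shows up as "secure_channel": true in
# the saved configs earlier in the log.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ceMYtF1fqw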
00:14:42.381 06:01:33 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:42.381 06:01:33 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 85261 /var/tmp/bdevperf.sock 00:14:42.381 06:01:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85261 ']' 00:14:42.381 06:01:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:42.381 06:01:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:42.381 06:01:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:42.381 06:01:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:42.381 06:01:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:42.381 [2024-07-13 06:01:33.966074] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:14:42.381 [2024-07-13 06:01:33.966321] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85261 ] 00:14:42.381 [2024-07-13 06:01:34.101581] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.640 [2024-07-13 06:01:34.144880] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:42.641 [2024-07-13 06:01:34.178495] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:43.205 06:01:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:43.205 06:01:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:43.205 06:01:34 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ceMYtF1fqw 00:14:43.464 06:01:35 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:43.722 [2024-07-13 06:01:35.291605] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:43.722 nvme0n1 00:14:43.722 06:01:35 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:43.980 Running I/O for 1 seconds... 
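Unlike the earlier bdevperf passes, which handed the PSK file path straight to bdev_nvme_attach_controller (the "psk": "/tmp/tmp.ceMYtF1fqw" entry in the earlier bdevperf config, and the reason the nvme_ctrlr_psk deprecation is logged when those instances shut down), this pass registers the key with the keyring first and then attaches by key name. The two initiator-side RPCs, as traced at target/tls.sh@227 and @228:

# Initiator-side TLS attach via the keyring: load the PSK file as "key0", then
# reference it with --psk key0 instead of a raw file path.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ceMYtF1fqw
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1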
00:14:44.953 00:14:44.953 Latency(us) 00:14:44.953 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:44.953 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:44.953 Verification LBA range: start 0x0 length 0x2000 00:14:44.953 nvme0n1 : 1.02 3999.85 15.62 0.00 0.00 31653.88 7000.44 20494.89 00:14:44.953 =================================================================================================================== 00:14:44.953 Total : 3999.85 15.62 0.00 0.00 31653.88 7000.44 20494.89 00:14:44.953 0 00:14:44.953 06:01:36 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 85261 00:14:44.953 06:01:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85261 ']' 00:14:44.953 06:01:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85261 00:14:44.953 06:01:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:44.953 06:01:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:44.953 06:01:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85261 00:14:44.953 killing process with pid 85261 00:14:44.953 Received shutdown signal, test time was about 1.000000 seconds 00:14:44.953 00:14:44.953 Latency(us) 00:14:44.953 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:44.953 =================================================================================================================== 00:14:44.953 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:44.953 06:01:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:44.953 06:01:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:44.953 06:01:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85261' 00:14:44.953 06:01:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85261 00:14:44.953 06:01:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85261 00:14:45.211 06:01:36 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 85219 00:14:45.211 06:01:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85219 ']' 00:14:45.211 06:01:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85219 00:14:45.211 06:01:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:45.211 06:01:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:45.211 06:01:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85219 00:14:45.211 killing process with pid 85219 00:14:45.211 06:01:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:45.211 06:01:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:45.211 06:01:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85219' 00:14:45.211 06:01:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85219 00:14:45.211 [2024-07-13 06:01:36.749166] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:45.211 06:01:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85219 00:14:45.211 06:01:36 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:14:45.211 06:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:45.211 06:01:36 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:14:45.211 06:01:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:45.211 06:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:45.211 06:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=85306 00:14:45.211 06:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 85306 00:14:45.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:45.211 06:01:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85306 ']' 00:14:45.211 06:01:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:45.211 06:01:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:45.211 06:01:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:45.211 06:01:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:45.211 06:01:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:45.469 [2024-07-13 06:01:36.956582] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:14:45.469 [2024-07-13 06:01:36.956962] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:45.469 [2024-07-13 06:01:37.103378] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:45.469 [2024-07-13 06:01:37.139304] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:45.469 [2024-07-13 06:01:37.139360] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:45.469 [2024-07-13 06:01:37.139414] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:45.469 [2024-07-13 06:01:37.139423] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:45.469 [2024-07-13 06:01:37.139429] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:45.469 [2024-07-13 06:01:37.139453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.469 [2024-07-13 06:01:37.168261] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:45.728 06:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:45.728 06:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:45.728 06:01:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:45.728 06:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:45.728 06:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:45.728 06:01:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:45.728 06:01:37 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:14:45.728 06:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.728 06:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:45.728 [2024-07-13 06:01:37.259204] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:45.728 malloc0 00:14:45.728 [2024-07-13 06:01:37.286652] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:45.728 [2024-07-13 06:01:37.286882] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:45.728 06:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.728 06:01:37 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=85331 00:14:45.728 06:01:37 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:45.728 06:01:37 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 85331 /var/tmp/bdevperf.sock 00:14:45.728 06:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85331 ']' 00:14:45.728 06:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:45.728 06:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:45.728 06:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:45.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:45.728 06:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:45.728 06:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:45.728 [2024-07-13 06:01:37.395410] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:14:45.728 [2024-07-13 06:01:37.395889] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85331 ] 00:14:45.986 [2024-07-13 06:01:37.540668] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:45.986 [2024-07-13 06:01:37.577192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:45.986 [2024-07-13 06:01:37.608997] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:45.986 06:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:45.986 06:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:45.986 06:01:37 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ceMYtF1fqw 00:14:46.244 06:01:37 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:46.501 [2024-07-13 06:01:38.156614] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:46.759 nvme0n1 00:14:46.759 06:01:38 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:46.759 Running I/O for 1 seconds... 00:14:47.691 00:14:47.691 Latency(us) 00:14:47.691 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:47.691 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:47.691 Verification LBA range: start 0x0 length 0x2000 00:14:47.691 nvme0n1 : 1.02 3930.66 15.35 0.00 0.00 32106.30 5600.35 20375.74 00:14:47.691 =================================================================================================================== 00:14:47.691 Total : 3930.66 15.35 0.00 0.00 32106.30 5600.35 20375.74 00:14:47.691 0 00:14:47.691 06:01:39 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:14:47.691 06:01:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.691 06:01:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:47.948 06:01:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.948 06:01:39 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:14:47.948 "subsystems": [ 00:14:47.948 { 00:14:47.948 "subsystem": "keyring", 00:14:47.948 "config": [ 00:14:47.948 { 00:14:47.948 "method": "keyring_file_add_key", 00:14:47.948 "params": { 00:14:47.948 "name": "key0", 00:14:47.948 "path": "/tmp/tmp.ceMYtF1fqw" 00:14:47.948 } 00:14:47.948 } 00:14:47.948 ] 00:14:47.948 }, 00:14:47.948 { 00:14:47.948 "subsystem": "iobuf", 00:14:47.948 "config": [ 00:14:47.948 { 00:14:47.948 "method": "iobuf_set_options", 00:14:47.948 "params": { 00:14:47.948 "small_pool_count": 8192, 00:14:47.948 "large_pool_count": 1024, 00:14:47.948 "small_bufsize": 8192, 00:14:47.948 "large_bufsize": 135168 00:14:47.948 } 00:14:47.948 } 00:14:47.948 ] 00:14:47.948 }, 00:14:47.948 { 00:14:47.948 "subsystem": "sock", 00:14:47.948 "config": [ 00:14:47.948 { 00:14:47.948 "method": "sock_set_default_impl", 00:14:47.948 "params": { 00:14:47.948 "impl_name": "uring" 
00:14:47.948 } 00:14:47.948 }, 00:14:47.948 { 00:14:47.948 "method": "sock_impl_set_options", 00:14:47.948 "params": { 00:14:47.948 "impl_name": "ssl", 00:14:47.948 "recv_buf_size": 4096, 00:14:47.948 "send_buf_size": 4096, 00:14:47.948 "enable_recv_pipe": true, 00:14:47.948 "enable_quickack": false, 00:14:47.948 "enable_placement_id": 0, 00:14:47.948 "enable_zerocopy_send_server": true, 00:14:47.948 "enable_zerocopy_send_client": false, 00:14:47.948 "zerocopy_threshold": 0, 00:14:47.948 "tls_version": 0, 00:14:47.948 "enable_ktls": false 00:14:47.948 } 00:14:47.948 }, 00:14:47.948 { 00:14:47.948 "method": "sock_impl_set_options", 00:14:47.948 "params": { 00:14:47.948 "impl_name": "posix", 00:14:47.948 "recv_buf_size": 2097152, 00:14:47.948 "send_buf_size": 2097152, 00:14:47.948 "enable_recv_pipe": true, 00:14:47.948 "enable_quickack": false, 00:14:47.948 "enable_placement_id": 0, 00:14:47.948 "enable_zerocopy_send_server": true, 00:14:47.948 "enable_zerocopy_send_client": false, 00:14:47.948 "zerocopy_threshold": 0, 00:14:47.948 "tls_version": 0, 00:14:47.948 "enable_ktls": false 00:14:47.948 } 00:14:47.948 }, 00:14:47.948 { 00:14:47.948 "method": "sock_impl_set_options", 00:14:47.948 "params": { 00:14:47.948 "impl_name": "uring", 00:14:47.948 "recv_buf_size": 2097152, 00:14:47.948 "send_buf_size": 2097152, 00:14:47.948 "enable_recv_pipe": true, 00:14:47.948 "enable_quickack": false, 00:14:47.948 "enable_placement_id": 0, 00:14:47.948 "enable_zerocopy_send_server": false, 00:14:47.948 "enable_zerocopy_send_client": false, 00:14:47.948 "zerocopy_threshold": 0, 00:14:47.948 "tls_version": 0, 00:14:47.948 "enable_ktls": false 00:14:47.948 } 00:14:47.948 } 00:14:47.948 ] 00:14:47.948 }, 00:14:47.948 { 00:14:47.948 "subsystem": "vmd", 00:14:47.948 "config": [] 00:14:47.948 }, 00:14:47.948 { 00:14:47.948 "subsystem": "accel", 00:14:47.948 "config": [ 00:14:47.948 { 00:14:47.948 "method": "accel_set_options", 00:14:47.948 "params": { 00:14:47.948 "small_cache_size": 128, 00:14:47.948 "large_cache_size": 16, 00:14:47.948 "task_count": 2048, 00:14:47.948 "sequence_count": 2048, 00:14:47.948 "buf_count": 2048 00:14:47.948 } 00:14:47.948 } 00:14:47.948 ] 00:14:47.948 }, 00:14:47.948 { 00:14:47.948 "subsystem": "bdev", 00:14:47.948 "config": [ 00:14:47.948 { 00:14:47.948 "method": "bdev_set_options", 00:14:47.948 "params": { 00:14:47.948 "bdev_io_pool_size": 65535, 00:14:47.948 "bdev_io_cache_size": 256, 00:14:47.948 "bdev_auto_examine": true, 00:14:47.948 "iobuf_small_cache_size": 128, 00:14:47.948 "iobuf_large_cache_size": 16 00:14:47.948 } 00:14:47.948 }, 00:14:47.948 { 00:14:47.948 "method": "bdev_raid_set_options", 00:14:47.948 "params": { 00:14:47.948 "process_window_size_kb": 1024 00:14:47.948 } 00:14:47.948 }, 00:14:47.948 { 00:14:47.948 "method": "bdev_iscsi_set_options", 00:14:47.948 "params": { 00:14:47.948 "timeout_sec": 30 00:14:47.948 } 00:14:47.948 }, 00:14:47.948 { 00:14:47.948 "method": "bdev_nvme_set_options", 00:14:47.948 "params": { 00:14:47.948 "action_on_timeout": "none", 00:14:47.948 "timeout_us": 0, 00:14:47.948 "timeout_admin_us": 0, 00:14:47.948 "keep_alive_timeout_ms": 10000, 00:14:47.948 "arbitration_burst": 0, 00:14:47.948 "low_priority_weight": 0, 00:14:47.948 "medium_priority_weight": 0, 00:14:47.948 "high_priority_weight": 0, 00:14:47.948 "nvme_adminq_poll_period_us": 10000, 00:14:47.948 "nvme_ioq_poll_period_us": 0, 00:14:47.948 "io_queue_requests": 0, 00:14:47.948 "delay_cmd_submit": true, 00:14:47.948 "transport_retry_count": 4, 00:14:47.948 "bdev_retry_count": 3, 
00:14:47.948 "transport_ack_timeout": 0, 00:14:47.948 "ctrlr_loss_timeout_sec": 0, 00:14:47.948 "reconnect_delay_sec": 0, 00:14:47.948 "fast_io_fail_timeout_sec": 0, 00:14:47.948 "disable_auto_failback": false, 00:14:47.948 "generate_uuids": false, 00:14:47.948 "transport_tos": 0, 00:14:47.948 "nvme_error_stat": false, 00:14:47.948 "rdma_srq_size": 0, 00:14:47.948 "io_path_stat": false, 00:14:47.948 "allow_accel_sequence": false, 00:14:47.948 "rdma_max_cq_size": 0, 00:14:47.948 "rdma_cm_event_timeout_ms": 0, 00:14:47.948 "dhchap_digests": [ 00:14:47.948 "sha256", 00:14:47.948 "sha384", 00:14:47.948 "sha512" 00:14:47.948 ], 00:14:47.948 "dhchap_dhgroups": [ 00:14:47.948 "null", 00:14:47.948 "ffdhe2048", 00:14:47.948 "ffdhe3072", 00:14:47.948 "ffdhe4096", 00:14:47.948 "ffdhe6144", 00:14:47.948 "ffdhe8192" 00:14:47.948 ] 00:14:47.948 } 00:14:47.948 }, 00:14:47.948 { 00:14:47.948 "method": "bdev_nvme_set_hotplug", 00:14:47.948 "params": { 00:14:47.948 "period_us": 100000, 00:14:47.948 "enable": false 00:14:47.948 } 00:14:47.948 }, 00:14:47.948 { 00:14:47.948 "method": "bdev_malloc_create", 00:14:47.948 "params": { 00:14:47.948 "name": "malloc0", 00:14:47.948 "num_blocks": 8192, 00:14:47.948 "block_size": 4096, 00:14:47.948 "physical_block_size": 4096, 00:14:47.948 "uuid": "b64b90dc-961c-4175-8737-5ba71aae484b", 00:14:47.948 "optimal_io_boundary": 0 00:14:47.948 } 00:14:47.948 }, 00:14:47.948 { 00:14:47.948 "method": "bdev_wait_for_examine" 00:14:47.948 } 00:14:47.948 ] 00:14:47.948 }, 00:14:47.948 { 00:14:47.948 "subsystem": "nbd", 00:14:47.948 "config": [] 00:14:47.948 }, 00:14:47.948 { 00:14:47.948 "subsystem": "scheduler", 00:14:47.948 "config": [ 00:14:47.948 { 00:14:47.948 "method": "framework_set_scheduler", 00:14:47.948 "params": { 00:14:47.948 "name": "static" 00:14:47.948 } 00:14:47.948 } 00:14:47.948 ] 00:14:47.948 }, 00:14:47.948 { 00:14:47.948 "subsystem": "nvmf", 00:14:47.948 "config": [ 00:14:47.948 { 00:14:47.948 "method": "nvmf_set_config", 00:14:47.948 "params": { 00:14:47.948 "discovery_filter": "match_any", 00:14:47.948 "admin_cmd_passthru": { 00:14:47.948 "identify_ctrlr": false 00:14:47.948 } 00:14:47.948 } 00:14:47.948 }, 00:14:47.948 { 00:14:47.948 "method": "nvmf_set_max_subsystems", 00:14:47.948 "params": { 00:14:47.948 "max_subsystems": 1024 00:14:47.948 } 00:14:47.948 }, 00:14:47.948 { 00:14:47.948 "method": "nvmf_set_crdt", 00:14:47.948 "params": { 00:14:47.948 "crdt1": 0, 00:14:47.948 "crdt2": 0, 00:14:47.948 "crdt3": 0 00:14:47.948 } 00:14:47.948 }, 00:14:47.948 { 00:14:47.948 "method": "nvmf_create_transport", 00:14:47.948 "params": { 00:14:47.948 "trtype": "TCP", 00:14:47.948 "max_queue_depth": 128, 00:14:47.948 "max_io_qpairs_per_ctrlr": 127, 00:14:47.948 "in_capsule_data_size": 4096, 00:14:47.948 "max_io_size": 131072, 00:14:47.948 "io_unit_size": 131072, 00:14:47.948 "max_aq_depth": 128, 00:14:47.948 "num_shared_buffers": 511, 00:14:47.948 "buf_cache_size": 4294967295, 00:14:47.948 "dif_insert_or_strip": false, 00:14:47.948 "zcopy": false, 00:14:47.948 "c2h_success": false, 00:14:47.948 "sock_priority": 0, 00:14:47.948 "abort_timeout_sec": 1, 00:14:47.948 "ack_timeout": 0, 00:14:47.948 "data_wr_pool_size": 0 00:14:47.948 } 00:14:47.948 }, 00:14:47.948 { 00:14:47.948 "method": "nvmf_create_subsystem", 00:14:47.948 "params": { 00:14:47.948 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:47.948 "allow_any_host": false, 00:14:47.948 "serial_number": "00000000000000000000", 00:14:47.948 "model_number": "SPDK bdev Controller", 00:14:47.948 "max_namespaces": 32, 
00:14:47.948 "min_cntlid": 1, 00:14:47.948 "max_cntlid": 65519, 00:14:47.948 "ana_reporting": false 00:14:47.948 } 00:14:47.948 }, 00:14:47.948 { 00:14:47.948 "method": "nvmf_subsystem_add_host", 00:14:47.948 "params": { 00:14:47.948 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:47.948 "host": "nqn.2016-06.io.spdk:host1", 00:14:47.948 "psk": "key0" 00:14:47.948 } 00:14:47.948 }, 00:14:47.948 { 00:14:47.948 "method": "nvmf_subsystem_add_ns", 00:14:47.948 "params": { 00:14:47.948 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:47.948 "namespace": { 00:14:47.948 "nsid": 1, 00:14:47.948 "bdev_name": "malloc0", 00:14:47.948 "nguid": "B64B90DC961C417587375BA71AAE484B", 00:14:47.948 "uuid": "b64b90dc-961c-4175-8737-5ba71aae484b", 00:14:47.948 "no_auto_visible": false 00:14:47.948 } 00:14:47.948 } 00:14:47.948 }, 00:14:47.948 { 00:14:47.948 "method": "nvmf_subsystem_add_listener", 00:14:47.948 "params": { 00:14:47.948 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:47.948 "listen_address": { 00:14:47.948 "trtype": "TCP", 00:14:47.948 "adrfam": "IPv4", 00:14:47.948 "traddr": "10.0.0.2", 00:14:47.948 "trsvcid": "4420" 00:14:47.948 }, 00:14:47.948 "secure_channel": true 00:14:47.948 } 00:14:47.948 } 00:14:47.948 ] 00:14:47.948 } 00:14:47.948 ] 00:14:47.948 }' 00:14:47.948 06:01:39 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:48.206 06:01:39 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:14:48.206 "subsystems": [ 00:14:48.206 { 00:14:48.206 "subsystem": "keyring", 00:14:48.206 "config": [ 00:14:48.206 { 00:14:48.206 "method": "keyring_file_add_key", 00:14:48.206 "params": { 00:14:48.206 "name": "key0", 00:14:48.206 "path": "/tmp/tmp.ceMYtF1fqw" 00:14:48.206 } 00:14:48.206 } 00:14:48.206 ] 00:14:48.206 }, 00:14:48.206 { 00:14:48.206 "subsystem": "iobuf", 00:14:48.206 "config": [ 00:14:48.206 { 00:14:48.206 "method": "iobuf_set_options", 00:14:48.206 "params": { 00:14:48.206 "small_pool_count": 8192, 00:14:48.206 "large_pool_count": 1024, 00:14:48.206 "small_bufsize": 8192, 00:14:48.206 "large_bufsize": 135168 00:14:48.206 } 00:14:48.206 } 00:14:48.206 ] 00:14:48.206 }, 00:14:48.206 { 00:14:48.206 "subsystem": "sock", 00:14:48.206 "config": [ 00:14:48.206 { 00:14:48.206 "method": "sock_set_default_impl", 00:14:48.206 "params": { 00:14:48.206 "impl_name": "uring" 00:14:48.206 } 00:14:48.206 }, 00:14:48.206 { 00:14:48.206 "method": "sock_impl_set_options", 00:14:48.206 "params": { 00:14:48.206 "impl_name": "ssl", 00:14:48.206 "recv_buf_size": 4096, 00:14:48.206 "send_buf_size": 4096, 00:14:48.206 "enable_recv_pipe": true, 00:14:48.206 "enable_quickack": false, 00:14:48.206 "enable_placement_id": 0, 00:14:48.206 "enable_zerocopy_send_server": true, 00:14:48.206 "enable_zerocopy_send_client": false, 00:14:48.206 "zerocopy_threshold": 0, 00:14:48.206 "tls_version": 0, 00:14:48.206 "enable_ktls": false 00:14:48.206 } 00:14:48.206 }, 00:14:48.206 { 00:14:48.206 "method": "sock_impl_set_options", 00:14:48.206 "params": { 00:14:48.206 "impl_name": "posix", 00:14:48.206 "recv_buf_size": 2097152, 00:14:48.206 "send_buf_size": 2097152, 00:14:48.206 "enable_recv_pipe": true, 00:14:48.206 "enable_quickack": false, 00:14:48.206 "enable_placement_id": 0, 00:14:48.206 "enable_zerocopy_send_server": true, 00:14:48.206 "enable_zerocopy_send_client": false, 00:14:48.206 "zerocopy_threshold": 0, 00:14:48.206 "tls_version": 0, 00:14:48.206 "enable_ktls": false 00:14:48.206 } 00:14:48.206 }, 00:14:48.206 { 00:14:48.206 "method": 
"sock_impl_set_options", 00:14:48.206 "params": { 00:14:48.206 "impl_name": "uring", 00:14:48.206 "recv_buf_size": 2097152, 00:14:48.206 "send_buf_size": 2097152, 00:14:48.206 "enable_recv_pipe": true, 00:14:48.206 "enable_quickack": false, 00:14:48.206 "enable_placement_id": 0, 00:14:48.206 "enable_zerocopy_send_server": false, 00:14:48.206 "enable_zerocopy_send_client": false, 00:14:48.206 "zerocopy_threshold": 0, 00:14:48.206 "tls_version": 0, 00:14:48.206 "enable_ktls": false 00:14:48.206 } 00:14:48.206 } 00:14:48.206 ] 00:14:48.206 }, 00:14:48.206 { 00:14:48.206 "subsystem": "vmd", 00:14:48.206 "config": [] 00:14:48.206 }, 00:14:48.206 { 00:14:48.206 "subsystem": "accel", 00:14:48.206 "config": [ 00:14:48.206 { 00:14:48.206 "method": "accel_set_options", 00:14:48.206 "params": { 00:14:48.206 "small_cache_size": 128, 00:14:48.206 "large_cache_size": 16, 00:14:48.206 "task_count": 2048, 00:14:48.206 "sequence_count": 2048, 00:14:48.206 "buf_count": 2048 00:14:48.206 } 00:14:48.206 } 00:14:48.206 ] 00:14:48.206 }, 00:14:48.206 { 00:14:48.206 "subsystem": "bdev", 00:14:48.206 "config": [ 00:14:48.206 { 00:14:48.206 "method": "bdev_set_options", 00:14:48.206 "params": { 00:14:48.206 "bdev_io_pool_size": 65535, 00:14:48.206 "bdev_io_cache_size": 256, 00:14:48.206 "bdev_auto_examine": true, 00:14:48.206 "iobuf_small_cache_size": 128, 00:14:48.206 "iobuf_large_cache_size": 16 00:14:48.206 } 00:14:48.206 }, 00:14:48.206 { 00:14:48.206 "method": "bdev_raid_set_options", 00:14:48.206 "params": { 00:14:48.206 "process_window_size_kb": 1024 00:14:48.206 } 00:14:48.206 }, 00:14:48.206 { 00:14:48.206 "method": "bdev_iscsi_set_options", 00:14:48.206 "params": { 00:14:48.206 "timeout_sec": 30 00:14:48.206 } 00:14:48.206 }, 00:14:48.206 { 00:14:48.206 "method": "bdev_nvme_set_options", 00:14:48.206 "params": { 00:14:48.206 "action_on_timeout": "none", 00:14:48.206 "timeout_us": 0, 00:14:48.206 "timeout_admin_us": 0, 00:14:48.206 "keep_alive_timeout_ms": 10000, 00:14:48.206 "arbitration_burst": 0, 00:14:48.206 "low_priority_weight": 0, 00:14:48.206 "medium_priority_weight": 0, 00:14:48.206 "high_priority_weight": 0, 00:14:48.206 "nvme_adminq_poll_period_us": 10000, 00:14:48.206 "nvme_ioq_poll_period_us": 0, 00:14:48.206 "io_queue_requests": 512, 00:14:48.206 "delay_cmd_submit": true, 00:14:48.206 "transport_retry_count": 4, 00:14:48.206 "bdev_retry_count": 3, 00:14:48.206 "transport_ack_timeout": 0, 00:14:48.206 "ctrlr_loss_timeout_sec": 0, 00:14:48.206 "reconnect_delay_sec": 0, 00:14:48.206 "fast_io_fail_timeout_sec": 0, 00:14:48.206 "disable_auto_failback": false, 00:14:48.206 "generate_uuids": false, 00:14:48.206 "transport_tos": 0, 00:14:48.206 "nvme_error_stat": false, 00:14:48.206 "rdma_srq_size": 0, 00:14:48.206 "io_path_stat": false, 00:14:48.206 "allow_accel_sequence": false, 00:14:48.206 "rdma_max_cq_size": 0, 00:14:48.206 "rdma_cm_event_timeout_ms": 0, 00:14:48.206 "dhchap_digests": [ 00:14:48.206 "sha256", 00:14:48.206 "sha384", 00:14:48.206 "sha512" 00:14:48.206 ], 00:14:48.206 "dhchap_dhgroups": [ 00:14:48.206 "null", 00:14:48.206 "ffdhe2048", 00:14:48.206 "ffdhe3072", 00:14:48.206 "ffdhe4096", 00:14:48.206 "ffdhe6144", 00:14:48.206 "ffdhe8192" 00:14:48.206 ] 00:14:48.206 } 00:14:48.206 }, 00:14:48.206 { 00:14:48.206 "method": "bdev_nvme_attach_controller", 00:14:48.206 "params": { 00:14:48.206 "name": "nvme0", 00:14:48.206 "trtype": "TCP", 00:14:48.206 "adrfam": "IPv4", 00:14:48.206 "traddr": "10.0.0.2", 00:14:48.206 "trsvcid": "4420", 00:14:48.206 "subnqn": "nqn.2016-06.io.spdk:cnode1", 
00:14:48.206 "prchk_reftag": false, 00:14:48.206 "prchk_guard": false, 00:14:48.206 "ctrlr_loss_timeout_sec": 0, 00:14:48.206 "reconnect_delay_sec": 0, 00:14:48.206 "fast_io_fail_timeout_sec": 0, 00:14:48.206 "psk": "key0", 00:14:48.206 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:48.206 "hdgst": false, 00:14:48.206 "ddgst": false 00:14:48.206 } 00:14:48.206 }, 00:14:48.206 { 00:14:48.206 "method": "bdev_nvme_set_hotplug", 00:14:48.206 "params": { 00:14:48.206 "period_us": 100000, 00:14:48.206 "enable": false 00:14:48.206 } 00:14:48.206 }, 00:14:48.206 { 00:14:48.206 "method": "bdev_enable_histogram", 00:14:48.206 "params": { 00:14:48.206 "name": "nvme0n1", 00:14:48.206 "enable": true 00:14:48.206 } 00:14:48.206 }, 00:14:48.206 { 00:14:48.206 "method": "bdev_wait_for_examine" 00:14:48.206 } 00:14:48.206 ] 00:14:48.206 }, 00:14:48.206 { 00:14:48.206 "subsystem": "nbd", 00:14:48.206 "config": [] 00:14:48.206 } 00:14:48.206 ] 00:14:48.206 }' 00:14:48.206 06:01:39 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 85331 00:14:48.206 06:01:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85331 ']' 00:14:48.206 06:01:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85331 00:14:48.206 06:01:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:48.206 06:01:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:48.206 06:01:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85331 00:14:48.206 06:01:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:48.206 06:01:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:48.206 06:01:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85331' 00:14:48.206 killing process with pid 85331 00:14:48.206 Received shutdown signal, test time was about 1.000000 seconds 00:14:48.206 00:14:48.206 Latency(us) 00:14:48.206 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:48.206 =================================================================================================================== 00:14:48.206 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:48.206 06:01:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85331 00:14:48.206 06:01:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85331 00:14:48.464 06:01:40 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 85306 00:14:48.464 06:01:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85306 ']' 00:14:48.464 06:01:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85306 00:14:48.464 06:01:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:48.464 06:01:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:48.464 06:01:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85306 00:14:48.464 06:01:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:48.464 killing process with pid 85306 00:14:48.464 06:01:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:48.464 06:01:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85306' 00:14:48.464 06:01:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85306 00:14:48.464 06:01:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85306 
00:14:48.464 06:01:40 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:14:48.464 06:01:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:48.464 06:01:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:48.464 06:01:40 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:14:48.464 "subsystems": [ 00:14:48.464 { 00:14:48.464 "subsystem": "keyring", 00:14:48.464 "config": [ 00:14:48.464 { 00:14:48.464 "method": "keyring_file_add_key", 00:14:48.464 "params": { 00:14:48.464 "name": "key0", 00:14:48.464 "path": "/tmp/tmp.ceMYtF1fqw" 00:14:48.464 } 00:14:48.464 } 00:14:48.464 ] 00:14:48.464 }, 00:14:48.464 { 00:14:48.464 "subsystem": "iobuf", 00:14:48.464 "config": [ 00:14:48.464 { 00:14:48.464 "method": "iobuf_set_options", 00:14:48.464 "params": { 00:14:48.464 "small_pool_count": 8192, 00:14:48.464 "large_pool_count": 1024, 00:14:48.464 "small_bufsize": 8192, 00:14:48.464 "large_bufsize": 135168 00:14:48.464 } 00:14:48.464 } 00:14:48.464 ] 00:14:48.464 }, 00:14:48.464 { 00:14:48.464 "subsystem": "sock", 00:14:48.464 "config": [ 00:14:48.464 { 00:14:48.464 "method": "sock_set_default_impl", 00:14:48.464 "params": { 00:14:48.464 "impl_name": "uring" 00:14:48.464 } 00:14:48.464 }, 00:14:48.464 { 00:14:48.465 "method": "sock_impl_set_options", 00:14:48.465 "params": { 00:14:48.465 "impl_name": "ssl", 00:14:48.465 "recv_buf_size": 4096, 00:14:48.465 "send_buf_size": 4096, 00:14:48.465 "enable_recv_pipe": true, 00:14:48.465 "enable_quickack": false, 00:14:48.465 "enable_placement_id": 0, 00:14:48.465 "enable_zerocopy_send_server": true, 00:14:48.465 "enable_zerocopy_send_client": false, 00:14:48.465 "zerocopy_threshold": 0, 00:14:48.465 "tls_version": 0, 00:14:48.465 "enable_ktls": false 00:14:48.465 } 00:14:48.465 }, 00:14:48.465 { 00:14:48.465 "method": "sock_impl_set_options", 00:14:48.465 "params": { 00:14:48.465 "impl_name": "posix", 00:14:48.465 "recv_buf_size": 2097152, 00:14:48.465 "send_buf_size": 2097152, 00:14:48.465 "enable_recv_pipe": true, 00:14:48.465 "enable_quickack": false, 00:14:48.465 "enable_placement_id": 0, 00:14:48.465 "enable_zerocopy_send_server": true, 00:14:48.465 "enable_zerocopy_send_client": false, 00:14:48.465 "zerocopy_threshold": 0, 00:14:48.465 "tls_version": 0, 00:14:48.465 "enable_ktls": false 00:14:48.465 } 00:14:48.465 }, 00:14:48.465 { 00:14:48.465 "method": "sock_impl_set_options", 00:14:48.465 "params": { 00:14:48.465 "impl_name": "uring", 00:14:48.465 "recv_buf_size": 2097152, 00:14:48.465 "send_buf_size": 2097152, 00:14:48.465 "enable_recv_pipe": true, 00:14:48.465 "enable_quickack": false, 00:14:48.465 "enable_placement_id": 0, 00:14:48.465 "enable_zerocopy_send_server": false, 00:14:48.465 "enable_zerocopy_send_client": false, 00:14:48.465 "zerocopy_threshold": 0, 00:14:48.465 "tls_version": 0, 00:14:48.465 "enable_ktls": false 00:14:48.465 } 00:14:48.465 } 00:14:48.465 ] 00:14:48.465 }, 00:14:48.465 { 00:14:48.465 "subsystem": "vmd", 00:14:48.465 "config": [] 00:14:48.465 }, 00:14:48.465 { 00:14:48.465 "subsystem": "accel", 00:14:48.465 "config": [ 00:14:48.465 { 00:14:48.465 "method": "accel_set_options", 00:14:48.465 "params": { 00:14:48.465 "small_cache_size": 128, 00:14:48.465 "large_cache_size": 16, 00:14:48.465 "task_count": 2048, 00:14:48.465 "sequence_count": 2048, 00:14:48.465 "buf_count": 2048 00:14:48.465 } 00:14:48.465 } 00:14:48.465 ] 00:14:48.465 }, 00:14:48.465 { 00:14:48.465 "subsystem": "bdev", 00:14:48.465 "config": [ 00:14:48.465 { 00:14:48.465 
"method": "bdev_set_options", 00:14:48.465 "params": { 00:14:48.465 "bdev_io_pool_size": 65535, 00:14:48.465 "bdev_io_cache_size": 256, 00:14:48.465 "bdev_auto_examine": true, 00:14:48.465 "iobuf_small_cache_size": 128, 00:14:48.465 "iobuf_large_cache_size": 16 00:14:48.465 } 00:14:48.465 }, 00:14:48.465 { 00:14:48.465 "method": "bdev_raid_set_options", 00:14:48.465 "params": { 00:14:48.465 "process_window_size_kb": 1024 00:14:48.465 } 00:14:48.465 }, 00:14:48.465 { 00:14:48.465 "method": "bdev_iscsi_set_options", 00:14:48.465 "params": { 00:14:48.465 "timeout_sec": 30 00:14:48.465 } 00:14:48.465 }, 00:14:48.465 { 00:14:48.465 "method": "bdev_nvme_set_options", 00:14:48.465 "params": { 00:14:48.465 "action_on_timeout": "none", 00:14:48.465 "timeout_us": 0, 00:14:48.465 "timeout_admin_us": 0, 00:14:48.465 "keep_alive_timeout_ms": 10000, 00:14:48.465 "arbitration_burst": 0, 00:14:48.465 "low_priority_weight": 0, 00:14:48.465 "medium_priority_weight": 0, 00:14:48.465 "high_priority_weight": 0, 00:14:48.465 "nvme_adminq_poll_period_us": 10000, 00:14:48.465 "nvme_ioq_poll_period_us": 0, 00:14:48.465 "io_queue_requests": 0, 00:14:48.465 "delay_cmd_submit": true, 00:14:48.465 "transport_retry_count": 4, 00:14:48.465 "bdev_retry_count": 3, 00:14:48.465 "transport_ack_timeout": 0, 00:14:48.465 "ctrlr_loss_timeout_sec": 0, 00:14:48.465 "reconnect_delay_sec": 0, 00:14:48.465 "fast_io_fail_timeout_sec": 0, 00:14:48.465 "disable_auto_failback": false, 00:14:48.465 "generate_uuids": false, 00:14:48.465 "transport_tos": 0, 00:14:48.465 "nvme_error_stat": false, 00:14:48.465 "rdma_srq_size": 0, 00:14:48.465 "io_path_stat": false, 00:14:48.465 "allow_accel_sequence": false, 00:14:48.465 "rdma_max_cq_size": 0, 00:14:48.465 "rdma_cm_event_timeout_ms": 0, 00:14:48.465 "dhchap_digests": [ 00:14:48.465 "sha256", 00:14:48.465 "sha384", 00:14:48.465 "sha512" 00:14:48.465 ], 00:14:48.465 "dhchap_dhgroups": [ 00:14:48.465 "null", 00:14:48.465 "ffdhe2048", 00:14:48.465 "ffdhe3072", 00:14:48.465 "ffdhe4096", 00:14:48.465 "ffdhe6144", 00:14:48.465 "ffdhe8192" 00:14:48.465 ] 00:14:48.465 } 00:14:48.465 }, 00:14:48.465 { 00:14:48.465 "method": "bdev_nvme_set_hotplug", 00:14:48.465 "params": { 00:14:48.465 "period_us": 100000, 00:14:48.465 "enable": false 00:14:48.465 } 00:14:48.465 }, 00:14:48.465 { 00:14:48.465 "method": "bdev_malloc_create", 00:14:48.465 "params": { 00:14:48.465 "name": "malloc0", 00:14:48.465 "num_blocks": 8192, 00:14:48.465 "block_size": 4096, 00:14:48.465 "physical_block_size": 4096, 00:14:48.465 "uuid": "b64b90dc-961c-4175-8737-5ba71aae484b", 00:14:48.465 "optimal_io_boundary": 0 00:14:48.465 } 00:14:48.465 }, 00:14:48.465 { 00:14:48.465 "method": "bdev_wait_for_examine" 00:14:48.465 } 00:14:48.465 ] 00:14:48.465 }, 00:14:48.465 { 00:14:48.465 "subsystem": "nbd", 00:14:48.465 "config": [] 00:14:48.465 }, 00:14:48.465 { 00:14:48.465 "subsystem": "scheduler", 00:14:48.465 "config": [ 00:14:48.465 { 00:14:48.465 "method": "framework_set_scheduler", 00:14:48.465 "params": { 00:14:48.465 "name": "static" 00:14:48.465 } 00:14:48.465 } 00:14:48.465 ] 00:14:48.465 }, 00:14:48.465 { 00:14:48.465 "subsystem": "nvmf", 00:14:48.465 "config": [ 00:14:48.465 { 00:14:48.465 "method": "nvmf_set_config", 00:14:48.465 "params": { 00:14:48.465 "discovery_filter": "match_any", 00:14:48.465 "admin_cmd_passthru": { 00:14:48.465 "identify_ctrlr": false 00:14:48.465 } 00:14:48.465 } 00:14:48.465 }, 00:14:48.465 { 00:14:48.465 "method": "nvmf_set_max_subsystems", 00:14:48.465 "params": { 00:14:48.465 "max_subsystems": 
1024 00:14:48.465 } 00:14:48.465 }, 00:14:48.465 { 00:14:48.465 "method": "nvmf_set_crdt", 00:14:48.466 "params": { 00:14:48.466 "crdt1": 0, 00:14:48.466 "crdt2": 0, 00:14:48.466 "crdt3": 0 00:14:48.466 } 00:14:48.466 }, 00:14:48.466 { 00:14:48.466 "method": "nvmf_create_transport", 00:14:48.466 "params": { 00:14:48.466 "trtype": "TCP", 00:14:48.466 "max_queue_depth": 128, 00:14:48.466 "max_io_qpairs_per_ctrlr": 127, 00:14:48.466 "in_capsule_data_size": 4096, 00:14:48.466 "max_io_size": 131072, 00:14:48.466 "io_unit_size": 131072, 00:14:48.466 "max_aq_depth": 128, 00:14:48.466 "num_shared_buffers": 511, 00:14:48.466 "buf_cache_size": 4294967295, 00:14:48.466 "dif_insert_or_strip": false, 00:14:48.466 "zcopy": false, 00:14:48.466 "c2h_success": false, 00:14:48.466 "sock_priority": 0, 00:14:48.466 "abort_timeout_sec": 1, 00:14:48.466 "ack_timeout": 0, 00:14:48.466 "data_wr_pool_size": 0 00:14:48.466 } 00:14:48.466 }, 00:14:48.466 { 00:14:48.466 "method": "nvmf_create_subsystem", 00:14:48.466 "params": { 00:14:48.466 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:48.466 "allow_any_host": false, 00:14:48.466 "serial_number": "00000000000000000000", 00:14:48.466 "model_number": "SPDK bdev Controller", 00:14:48.466 "max_namespaces": 32, 00:14:48.466 "min_cntlid": 1, 00:14:48.466 "max_cntlid": 65519, 00:14:48.466 "ana_reporting": false 00:14:48.466 } 00:14:48.466 }, 00:14:48.466 { 00:14:48.466 "method": "nvmf_subsystem_add_host", 00:14:48.466 "params": { 00:14:48.466 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:48.466 "host": "nqn.2016-06.io.spdk:host1", 00:14:48.466 "psk": "key0" 00:14:48.466 } 00:14:48.466 }, 00:14:48.466 { 00:14:48.466 "method": "nvmf_subsystem_add_ns", 00:14:48.466 "params": { 00:14:48.466 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:48.466 "namespace": { 00:14:48.466 "nsid": 1, 00:14:48.466 "bdev_name": "malloc0", 00:14:48.466 "nguid": "B64B90DC961C417587375BA71AAE484B", 00:14:48.466 "uuid": "b64b90dc-961c-4175-8737-5ba71aae484b", 00:14:48.466 "no_auto_visible": false 00:14:48.466 } 00:14:48.466 } 00:14:48.466 }, 00:14:48.466 { 00:14:48.466 "method": "nvmf_subsystem_add_listener", 00:14:48.466 "params": { 00:14:48.466 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:48.466 "listen_address": { 00:14:48.466 "trtype": "TCP", 00:14:48.466 "adrfam": "IPv4", 00:14:48.466 "traddr": "10.0.0.2", 00:14:48.466 "trsvcid": "4420" 00:14:48.466 }, 00:14:48.466 "secure_channel": true 00:14:48.466 } 00:14:48.466 } 00:14:48.466 ] 00:14:48.466 } 00:14:48.466 ] 00:14:48.466 }' 00:14:48.466 06:01:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:48.724 06:01:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=85384 00:14:48.724 06:01:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:14:48.724 06:01:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 85384 00:14:48.724 06:01:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85384 ']' 00:14:48.724 06:01:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:48.724 06:01:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:48.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:48.724 06:01:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
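The block above shows the pattern used throughout this test: a full JSON configuration (the same shape save_config produces) is echoed onto a file descriptor and handed to a fresh nvmf_tgt via -c /dev/fd/62, so the new instance comes up pre-configured. A minimal sketch of that round trip; framework_wait_init is used here only as a simple stand-in for the waitforlisten polling helper from autotest_common.sh.

    # Round-trip a live configuration into a new target (sketch).
    # /dev/fd/62 in the trace is just whatever fd the shell assigned to <(...).
    config=$(scripts/rpc.py save_config)
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$config") &
    nvmfpid=$!
    scripts/rpc.py framework_wait_init        # returns once the app has finished startup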
00:14:48.724 06:01:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:48.724 06:01:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:48.724 [2024-07-13 06:01:40.251820] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:14:48.724 [2024-07-13 06:01:40.251913] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:48.724 [2024-07-13 06:01:40.390602] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.724 [2024-07-13 06:01:40.424506] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:48.724 [2024-07-13 06:01:40.424549] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:48.724 [2024-07-13 06:01:40.424559] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:48.724 [2024-07-13 06:01:40.424566] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:48.724 [2024-07-13 06:01:40.424573] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:48.724 [2024-07-13 06:01:40.424651] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:48.983 [2024-07-13 06:01:40.568979] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:48.983 [2024-07-13 06:01:40.623034] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:48.983 [2024-07-13 06:01:40.655004] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:48.983 [2024-07-13 06:01:40.655202] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:49.546 06:01:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:49.546 06:01:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:49.546 06:01:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:49.546 06:01:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:49.546 06:01:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:49.546 06:01:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:49.546 06:01:41 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=85416 00:14:49.546 06:01:41 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 85416 /var/tmp/bdevperf.sock 00:14:49.546 06:01:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85416 ']' 00:14:49.546 06:01:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:49.546 06:01:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:49.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:49.546 06:01:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
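At this point the target is up and listening with TLS on 10.0.0.2:4420, and bdevperf is launched in wait-for-RPC mode (-z) on its own socket (-r /var/tmp/bdevperf.sock) so it can be configured before any I/O starts; the JSON echoed on /dev/fd/63 below is that bdevperf-side configuration. Condensed into a sketch, with the flags copied from the trace, the flow is roughly:

    # bdevperf driven over its RPC socket (sketch).
    build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg") &
    bdevperf_pid=$!
    # ...wait for the socket, confirm the controller attached, then run the workload:
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests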
00:14:49.546 06:01:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:49.546 06:01:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:49.546 06:01:41 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:14:49.546 06:01:41 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:14:49.546 "subsystems": [ 00:14:49.546 { 00:14:49.546 "subsystem": "keyring", 00:14:49.546 "config": [ 00:14:49.546 { 00:14:49.546 "method": "keyring_file_add_key", 00:14:49.546 "params": { 00:14:49.546 "name": "key0", 00:14:49.546 "path": "/tmp/tmp.ceMYtF1fqw" 00:14:49.546 } 00:14:49.546 } 00:14:49.546 ] 00:14:49.546 }, 00:14:49.546 { 00:14:49.546 "subsystem": "iobuf", 00:14:49.546 "config": [ 00:14:49.546 { 00:14:49.546 "method": "iobuf_set_options", 00:14:49.546 "params": { 00:14:49.546 "small_pool_count": 8192, 00:14:49.546 "large_pool_count": 1024, 00:14:49.546 "small_bufsize": 8192, 00:14:49.546 "large_bufsize": 135168 00:14:49.546 } 00:14:49.546 } 00:14:49.546 ] 00:14:49.546 }, 00:14:49.546 { 00:14:49.546 "subsystem": "sock", 00:14:49.546 "config": [ 00:14:49.546 { 00:14:49.546 "method": "sock_set_default_impl", 00:14:49.546 "params": { 00:14:49.546 "impl_name": "uring" 00:14:49.546 } 00:14:49.546 }, 00:14:49.546 { 00:14:49.546 "method": "sock_impl_set_options", 00:14:49.546 "params": { 00:14:49.546 "impl_name": "ssl", 00:14:49.546 "recv_buf_size": 4096, 00:14:49.546 "send_buf_size": 4096, 00:14:49.546 "enable_recv_pipe": true, 00:14:49.546 "enable_quickack": false, 00:14:49.546 "enable_placement_id": 0, 00:14:49.546 "enable_zerocopy_send_server": true, 00:14:49.546 "enable_zerocopy_send_client": false, 00:14:49.546 "zerocopy_threshold": 0, 00:14:49.546 "tls_version": 0, 00:14:49.546 "enable_ktls": false 00:14:49.546 } 00:14:49.546 }, 00:14:49.546 { 00:14:49.546 "method": "sock_impl_set_options", 00:14:49.546 "params": { 00:14:49.546 "impl_name": "posix", 00:14:49.546 "recv_buf_size": 2097152, 00:14:49.546 "send_buf_size": 2097152, 00:14:49.546 "enable_recv_pipe": true, 00:14:49.546 "enable_quickack": false, 00:14:49.546 "enable_placement_id": 0, 00:14:49.546 "enable_zerocopy_send_server": true, 00:14:49.546 "enable_zerocopy_send_client": false, 00:14:49.546 "zerocopy_threshold": 0, 00:14:49.546 "tls_version": 0, 00:14:49.546 "enable_ktls": false 00:14:49.546 } 00:14:49.546 }, 00:14:49.546 { 00:14:49.546 "method": "sock_impl_set_options", 00:14:49.546 "params": { 00:14:49.546 "impl_name": "uring", 00:14:49.546 "recv_buf_size": 2097152, 00:14:49.546 "send_buf_size": 2097152, 00:14:49.546 "enable_recv_pipe": true, 00:14:49.546 "enable_quickack": false, 00:14:49.546 "enable_placement_id": 0, 00:14:49.546 "enable_zerocopy_send_server": false, 00:14:49.546 "enable_zerocopy_send_client": false, 00:14:49.546 "zerocopy_threshold": 0, 00:14:49.546 "tls_version": 0, 00:14:49.546 "enable_ktls": false 00:14:49.546 } 00:14:49.546 } 00:14:49.546 ] 00:14:49.546 }, 00:14:49.546 { 00:14:49.546 "subsystem": "vmd", 00:14:49.546 "config": [] 00:14:49.546 }, 00:14:49.546 { 00:14:49.546 "subsystem": "accel", 00:14:49.546 "config": [ 00:14:49.546 { 00:14:49.546 "method": "accel_set_options", 00:14:49.546 "params": { 00:14:49.546 "small_cache_size": 128, 00:14:49.546 "large_cache_size": 16, 00:14:49.546 "task_count": 2048, 00:14:49.546 "sequence_count": 2048, 00:14:49.546 "buf_count": 2048 00:14:49.546 } 00:14:49.546 } 00:14:49.546 ] 00:14:49.546 }, 00:14:49.546 { 
00:14:49.546 "subsystem": "bdev", 00:14:49.546 "config": [ 00:14:49.546 { 00:14:49.546 "method": "bdev_set_options", 00:14:49.546 "params": { 00:14:49.546 "bdev_io_pool_size": 65535, 00:14:49.546 "bdev_io_cache_size": 256, 00:14:49.546 "bdev_auto_examine": true, 00:14:49.546 "iobuf_small_cache_size": 128, 00:14:49.546 "iobuf_large_cache_size": 16 00:14:49.546 } 00:14:49.546 }, 00:14:49.546 { 00:14:49.546 "method": "bdev_raid_set_options", 00:14:49.546 "params": { 00:14:49.546 "process_window_size_kb": 1024 00:14:49.546 } 00:14:49.546 }, 00:14:49.546 { 00:14:49.546 "method": "bdev_iscsi_set_options", 00:14:49.546 "params": { 00:14:49.546 "timeout_sec": 30 00:14:49.546 } 00:14:49.546 }, 00:14:49.546 { 00:14:49.546 "method": "bdev_nvme_set_options", 00:14:49.546 "params": { 00:14:49.546 "action_on_timeout": "none", 00:14:49.546 "timeout_us": 0, 00:14:49.546 "timeout_admin_us": 0, 00:14:49.546 "keep_alive_timeout_ms": 10000, 00:14:49.546 "arbitration_burst": 0, 00:14:49.546 "low_priority_weight": 0, 00:14:49.546 "medium_priority_weight": 0, 00:14:49.546 "high_priority_weight": 0, 00:14:49.546 "nvme_adminq_poll_period_us": 10000, 00:14:49.546 "nvme_ioq_poll_period_us": 0, 00:14:49.546 "io_queue_requests": 512, 00:14:49.546 "delay_cmd_submit": true, 00:14:49.546 "transport_retry_count": 4, 00:14:49.546 "bdev_retry_count": 3, 00:14:49.546 "transport_ack_timeout": 0, 00:14:49.546 "ctrlr_loss_timeout_sec": 0, 00:14:49.546 "reconnect_delay_sec": 0, 00:14:49.546 "fast_io_fail_timeout_sec": 0, 00:14:49.546 "disable_auto_failback": false, 00:14:49.546 "generate_uuids": false, 00:14:49.546 "transport_tos": 0, 00:14:49.546 "nvme_error_stat": false, 00:14:49.546 "rdma_srq_size": 0, 00:14:49.546 "io_path_stat": false, 00:14:49.546 "allow_accel_sequence": false, 00:14:49.546 "rdma_max_cq_size": 0, 00:14:49.546 "rdma_cm_event_timeout_ms": 0, 00:14:49.547 "dhchap_digests": [ 00:14:49.547 "sha256", 00:14:49.547 "sha384", 00:14:49.547 "sha512" 00:14:49.547 ], 00:14:49.547 "dhchap_dhgroups": [ 00:14:49.547 "null", 00:14:49.547 "ffdhe2048", 00:14:49.547 "ffdhe3072", 00:14:49.547 "ffdhe4096", 00:14:49.547 "ffdhe6144", 00:14:49.547 "ffdhe8192" 00:14:49.547 ] 00:14:49.547 } 00:14:49.547 }, 00:14:49.547 { 00:14:49.547 "method": "bdev_nvme_attach_controller", 00:14:49.547 "params": { 00:14:49.547 "name": "nvme0", 00:14:49.547 "trtype": "TCP", 00:14:49.547 "adrfam": "IPv4", 00:14:49.547 "traddr": "10.0.0.2", 00:14:49.547 "trsvcid": "4420", 00:14:49.547 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:49.547 "prchk_reftag": false, 00:14:49.547 "prchk_guard": false, 00:14:49.547 "ctrlr_loss_timeout_sec": 0, 00:14:49.547 "reconnect_delay_sec": 0, 00:14:49.547 "fast_io_fail_timeout_sec": 0, 00:14:49.547 "psk": "key0", 00:14:49.547 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:49.547 "hdgst": false, 00:14:49.547 "ddgst": false 00:14:49.547 } 00:14:49.547 }, 00:14:49.547 { 00:14:49.547 "method": "bdev_nvme_set_hotplug", 00:14:49.547 "params": { 00:14:49.547 "period_us": 100000, 00:14:49.547 "enable": false 00:14:49.547 } 00:14:49.547 }, 00:14:49.547 { 00:14:49.547 "method": "bdev_enable_histogram", 00:14:49.547 "params": { 00:14:49.547 "name": "nvme0n1", 00:14:49.547 "enable": true 00:14:49.547 } 00:14:49.547 }, 00:14:49.547 { 00:14:49.547 "method": "bdev_wait_for_examine" 00:14:49.547 } 00:14:49.547 ] 00:14:49.547 }, 00:14:49.547 { 00:14:49.547 "subsystem": "nbd", 00:14:49.547 "config": [] 00:14:49.547 } 00:14:49.547 ] 00:14:49.547 }' 00:14:49.803 [2024-07-13 06:01:41.318002] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 
22.11.4 initialization... 00:14:49.804 [2024-07-13 06:01:41.318101] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85416 ] 00:14:49.804 [2024-07-13 06:01:41.456849] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.804 [2024-07-13 06:01:41.499384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:50.061 [2024-07-13 06:01:41.613866] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:50.061 [2024-07-13 06:01:41.644261] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:50.626 06:01:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:50.626 06:01:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:50.626 06:01:42 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:50.626 06:01:42 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:14:50.884 06:01:42 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:50.884 06:01:42 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:51.143 Running I/O for 1 seconds... 00:14:52.076 00:14:52.076 Latency(us) 00:14:52.076 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:52.076 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:52.076 Verification LBA range: start 0x0 length 0x2000 00:14:52.076 nvme0n1 : 1.02 4105.37 16.04 0.00 0.00 30900.01 5332.25 28478.37 00:14:52.076 =================================================================================================================== 00:14:52.076 Total : 4105.37 16.04 0.00 0.00 30900.01 5332.25 28478.37 00:14:52.076 0 00:14:52.076 06:01:43 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:14:52.076 06:01:43 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:14:52.076 06:01:43 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:14:52.076 06:01:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:14:52.076 06:01:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:14:52.076 06:01:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:14:52.076 06:01:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:52.076 06:01:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:14:52.076 06:01:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:14:52.076 06:01:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:14:52.076 06:01:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:52.076 nvmf_trace.0 00:14:52.076 06:01:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:14:52.076 06:01:43 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 85416 00:14:52.076 06:01:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85416 ']' 00:14:52.076 06:01:43 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85416 00:14:52.076 06:01:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:52.334 06:01:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:52.334 06:01:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85416 00:14:52.334 killing process with pid 85416 00:14:52.334 Received shutdown signal, test time was about 1.000000 seconds 00:14:52.334 00:14:52.334 Latency(us) 00:14:52.334 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:52.334 =================================================================================================================== 00:14:52.334 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:52.334 06:01:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:52.334 06:01:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:52.334 06:01:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85416' 00:14:52.334 06:01:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85416 00:14:52.334 06:01:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85416 00:14:52.334 06:01:43 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:14:52.334 06:01:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:52.334 06:01:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:14:52.334 06:01:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:52.334 06:01:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:14:52.334 06:01:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:52.334 06:01:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:52.334 rmmod nvme_tcp 00:14:52.334 rmmod nvme_fabrics 00:14:52.334 rmmod nvme_keyring 00:14:52.334 06:01:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:52.334 06:01:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:14:52.334 06:01:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:14:52.334 06:01:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 85384 ']' 00:14:52.334 06:01:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 85384 00:14:52.334 06:01:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85384 ']' 00:14:52.334 06:01:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85384 00:14:52.334 06:01:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:52.592 06:01:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:52.592 06:01:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85384 00:14:52.592 killing process with pid 85384 00:14:52.592 06:01:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:52.592 06:01:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:52.592 06:01:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85384' 00:14:52.592 06:01:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85384 00:14:52.592 06:01:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85384 00:14:52.592 06:01:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:52.592 06:01:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
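The teardown traced in this stretch follows the usual order: stop bdevperf, stop the nvmf target, then let nvmftestfini unload the NVMe-oF kernel modules (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines) before the test network and key files are removed in the lines that follow. Condensed into a sketch; the namespace deletion is an assumption based on the _remove_spdk_ns helper name and is not spelled out verbatim in this trace.

    kill "$bdevperf_pid"; wait "$bdevperf_pid"    # stop the initiator (pid 85416 here)
    kill "$nvmfpid";      wait "$nvmfpid"         # stop the target (pid 85384 here)
    modprobe -v -r nvme-tcp                       # the -v output is the rmmod lines above
    modprobe -v -r nvme-fabrics
    ip netns delete nvmf_tgt_ns_spdk              # assumed body of _remove_spdk_ns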
00:14:52.592 06:01:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:52.592 06:01:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:52.592 06:01:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:52.592 06:01:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:52.592 06:01:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:52.592 06:01:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:52.592 06:01:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:52.592 06:01:44 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.IbEXrZAizF /tmp/tmp.cqQ33I44lj /tmp/tmp.ceMYtF1fqw 00:14:52.592 00:14:52.592 real 1m14.494s 00:14:52.592 user 1m56.411s 00:14:52.592 sys 0m25.746s 00:14:52.592 06:01:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:52.592 06:01:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:52.592 ************************************ 00:14:52.592 END TEST nvmf_tls 00:14:52.592 ************************************ 00:14:52.592 06:01:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:52.592 06:01:44 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:52.592 06:01:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:52.592 06:01:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:52.592 06:01:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:52.851 ************************************ 00:14:52.851 START TEST nvmf_fips 00:14:52.851 ************************************ 00:14:52.851 06:01:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:52.851 * Looking for test storage... 
00:14:52.851 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:14:52.851 06:01:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:52.851 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:14:52.851 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:52.851 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:52.851 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:52.851 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:52.851 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:52.851 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:52.851 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:52.851 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:52.851 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:52.851 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:52.851 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:14:52.851 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=d95af516-4532-4483-a837-b3cd72acabce 00:14:52.851 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:52.851 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:52.851 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- 
scripts/common.sh@341 -- # case "$op" in 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:14:52.852 Error setting digest 00:14:52.852 00020B3B8C7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:14:52.852 00020B3B8C7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:52.852 06:01:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:52.853 06:01:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:14:52.853 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:52.853 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:52.853 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:52.853 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:52.853 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:52.853 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:52.853 06:01:44 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:52.853 06:01:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:52.853 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:52.853 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:52.853 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:52.853 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:52.853 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:52.853 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:52.853 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:52.853 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:52.853 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:52.853 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:52.853 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:52.853 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:52.853 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:52.853 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:52.853 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:52.853 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:52.853 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:52.853 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:52.853 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:53.110 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:53.110 Cannot find device "nvmf_tgt_br" 00:14:53.110 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # true 00:14:53.110 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:53.110 Cannot find device "nvmf_tgt_br2" 00:14:53.110 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # true 00:14:53.110 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:53.110 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:53.110 Cannot find device "nvmf_tgt_br" 00:14:53.110 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # true 00:14:53.110 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:53.110 Cannot find device "nvmf_tgt_br2" 00:14:53.110 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # true 00:14:53.110 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:53.110 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:53.110 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:53.110 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:53.110 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # true 00:14:53.110 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:53.110 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:53.110 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # true 00:14:53.110 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:53.110 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:53.110 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:53.110 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:53.110 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:53.110 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:53.110 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:53.110 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:53.110 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:53.110 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:53.110 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:53.110 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:53.110 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:53.110 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:53.110 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:53.110 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:53.111 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:53.111 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:53.111 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:53.111 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:53.111 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:53.368 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:53.368 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:53.368 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:53.368 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:53.368 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.109 ms 00:14:53.368 00:14:53.368 --- 10.0.0.2 ping statistics --- 00:14:53.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.368 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:14:53.368 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:53.368 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:14:53.368 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:14:53.368 00:14:53.368 --- 10.0.0.3 ping statistics --- 00:14:53.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.368 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:14:53.368 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:53.368 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:53.368 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:14:53.368 00:14:53.368 --- 10.0.0.1 ping statistics --- 00:14:53.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.368 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:14:53.368 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:53.368 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:14:53.368 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:53.368 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:53.368 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:53.368 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:53.368 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:53.368 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:53.368 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:53.368 06:01:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:14:53.368 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:53.368 06:01:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:53.368 06:01:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:53.368 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=85675 00:14:53.368 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:53.368 06:01:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 85675 00:14:53.368 06:01:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 85675 ']' 00:14:53.368 06:01:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:53.368 06:01:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:53.368 06:01:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:53.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:53.368 06:01:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:53.368 06:01:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:53.368 [2024-07-13 06:01:44.974167] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:14:53.368 [2024-07-13 06:01:44.974265] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:53.626 [2024-07-13 06:01:45.115095] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:53.626 [2024-07-13 06:01:45.153784] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:53.626 [2024-07-13 06:01:45.153836] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:53.626 [2024-07-13 06:01:45.153863] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:53.626 [2024-07-13 06:01:45.153871] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:53.626 [2024-07-13 06:01:45.153878] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:53.626 [2024-07-13 06:01:45.153907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:53.626 [2024-07-13 06:01:45.185327] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:54.559 06:01:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:54.559 06:01:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:14:54.559 06:01:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:54.559 06:01:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:54.559 06:01:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:54.559 06:01:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:54.559 06:01:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:14:54.559 06:01:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:54.559 06:01:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:54.559 06:01:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:54.559 06:01:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:54.559 06:01:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:54.559 06:01:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:54.559 06:01:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:54.559 [2024-07-13 06:01:46.233096] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:54.559 [2024-07-13 06:01:46.249030] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:54.559 [2024-07-13 06:01:46.249240] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:54.559 [2024-07-13 06:01:46.274834] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:54.559 malloc0 00:14:54.817 06:01:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
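An aside for readers tracing the FIPS setup above: the key provisioning and target configuration reduce to a short rpc.py sequence. The sketch below reuses the key string, file paths, bdev name, and NQN visible in this run; the malloc size, serial number, and the exact flag set passed by setup_nvmf_tgt_conf (in particular the add_host/--psk pairing, inferred here from the "PSK path" deprecation notice) are assumptions rather than a verbatim copy of fips.sh.

  #!/usr/bin/env bash
  # Hedged sketch of the TLS PSK provisioning and target config traced above.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt

  # Write the NVMe/TCP interchange-format PSK and lock down its permissions.
  echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key_path"
  chmod 0600 "$key_path"

  # TCP transport, a malloc-backed subsystem, a listener on 10.0.0.2:4420, and a
  # host entry pointing at the PSK file. Size/serial are placeholders for the sketch.
  "$rpc" nvmf_create_transport -t tcp -o -u 8192
  "$rpc" bdev_malloc_create -b malloc0 32 512
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  "$rpc" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key_path"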
00:14:54.817 06:01:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=85717 00:14:54.817 06:01:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:54.817 06:01:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 85717 /var/tmp/bdevperf.sock 00:14:54.817 06:01:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 85717 ']' 00:14:54.817 06:01:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:54.817 06:01:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:54.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:54.817 06:01:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:54.817 06:01:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:54.817 06:01:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:54.817 [2024-07-13 06:01:46.384303] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:14:54.817 [2024-07-13 06:01:46.384424] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85717 ] 00:14:54.817 [2024-07-13 06:01:46.522200] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.075 [2024-07-13 06:01:46.558203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:55.075 [2024-07-13 06:01:46.587695] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:55.642 06:01:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:55.642 06:01:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:14:55.642 06:01:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:55.900 [2024-07-13 06:01:47.483147] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:55.900 [2024-07-13 06:01:47.483314] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:55.900 TLSTESTn1 00:14:55.900 06:01:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:56.158 Running I/O for 10 seconds... 
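The initiator half of the run just started can be condensed to three steps, all visible in the trace above: start bdevperf with its own RPC socket, attach a TLS-protected NVMe/TCP controller, and trigger the registered workload. The short wait loop is a crude approximation of the test's waitforlisten helper, not a copy of it.

  #!/usr/bin/env bash
  spdk=/home/vagrant/spdk_repo/spdk
  sock=/var/tmp/bdevperf.sock

  # bdevperf in "wait for RPC" mode (-z): verify workload, qd 128, 4 KiB I/O, 10 s.
  "$spdk/build/examples/bdevperf" -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 &
  until [ -S "$sock" ]; do sleep 0.2; done   # stand-in for waitforlisten

  # Attach the remote namespace over TCP, presenting the PSK written earlier.
  "$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk "$spdk/test/nvmf/fips/key.txt"

  # Run the I/O phase; bdevperf prints the Latency(us) table seen below when done.
  # (bdevperf itself keeps running until the test stops it via killprocess.)
  "$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests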
00:15:06.122 00:15:06.122 Latency(us) 00:15:06.122 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:06.122 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:06.122 Verification LBA range: start 0x0 length 0x2000 00:15:06.122 TLSTESTn1 : 10.02 4001.03 15.63 0.00 0.00 31922.93 2591.65 21924.77 00:15:06.122 =================================================================================================================== 00:15:06.122 Total : 4001.03 15.63 0.00 0.00 31922.93 2591.65 21924.77 00:15:06.122 0 00:15:06.122 06:01:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:15:06.122 06:01:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:15:06.122 06:01:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:15:06.122 06:01:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:15:06.122 06:01:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:15:06.122 06:01:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:06.122 06:01:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:15:06.122 06:01:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:15:06.122 06:01:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:15:06.123 06:01:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:06.123 nvmf_trace.0 00:15:06.123 06:01:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:15:06.123 06:01:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 85717 00:15:06.123 06:01:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 85717 ']' 00:15:06.123 06:01:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 85717 00:15:06.123 06:01:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:15:06.123 06:01:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:06.123 06:01:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85717 00:15:06.397 06:01:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:06.397 06:01:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:06.397 06:01:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85717' 00:15:06.397 killing process with pid 85717 00:15:06.397 06:01:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 85717 00:15:06.397 Received shutdown signal, test time was about 10.000000 seconds 00:15:06.397 00:15:06.397 Latency(us) 00:15:06.397 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:06.397 =================================================================================================================== 00:15:06.397 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:06.397 [2024-07-13 06:01:57.865138] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:06.397 06:01:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 85717 00:15:06.397 06:01:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:15:06.397 06:01:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 
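The cleanup trap firing here does two things worth calling out: it archives the shared-memory trace file the target left in /dev/shm, and it stops bdevperf by PID only after checking that the PID still names the expected process. A reduced sketch follows (output directory and PID taken from this run; the real helpers are process_shm and killprocess in autotest_common.sh):

  #!/usr/bin/env bash
  out=/home/vagrant/spdk_repo/spdk/../output
  pid=85717   # bdevperf PID in this run

  # Archive every SHM trace file (here: nvmf_trace.0) for offline analysis.
  for shm in $(find /dev/shm -name '*.0' -printf '%f\n'); do
      tar -C /dev/shm/ -cvzf "$out/${shm}_shm.tar.gz" "$shm"
  done

  # Refuse to kill if the PID has been recycled into something unexpected,
  # then send SIGTERM and let the test shell reap the child.
  if [ -n "$(ps --no-headers -o comm= "$pid")" ] && \
     [ "$(ps --no-headers -o comm= "$pid")" != sudo ]; then
      kill "$pid"
  fi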
00:15:06.397 06:01:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:15:06.397 06:01:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:06.397 06:01:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:15:06.397 06:01:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:06.397 06:01:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:06.397 rmmod nvme_tcp 00:15:06.397 rmmod nvme_fabrics 00:15:06.397 rmmod nvme_keyring 00:15:06.701 06:01:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:06.701 06:01:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:15:06.701 06:01:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:15:06.701 06:01:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 85675 ']' 00:15:06.701 06:01:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 85675 00:15:06.701 06:01:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 85675 ']' 00:15:06.701 06:01:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 85675 00:15:06.701 06:01:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:15:06.701 06:01:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:06.701 06:01:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85675 00:15:06.701 06:01:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:06.701 06:01:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:06.701 killing process with pid 85675 00:15:06.701 06:01:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85675' 00:15:06.701 06:01:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 85675 00:15:06.701 [2024-07-13 06:01:58.152576] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:06.701 06:01:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 85675 00:15:06.701 06:01:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:06.701 06:01:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:06.701 06:01:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:06.701 06:01:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:06.701 06:01:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:06.701 06:01:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:06.701 06:01:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:06.702 06:01:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:06.702 06:01:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:06.702 06:01:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:06.702 00:15:06.702 real 0m14.019s 00:15:06.702 user 0m19.020s 00:15:06.702 sys 0m5.758s 00:15:06.702 06:01:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:06.702 06:01:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:06.702 ************************************ 00:15:06.702 END TEST nvmf_fips 00:15:06.702 ************************************ 00:15:06.702 06:01:58 
nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:06.702 06:01:58 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:15:06.702 06:01:58 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:15:06.702 06:01:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:06.702 06:01:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:06.702 06:01:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:06.702 ************************************ 00:15:06.702 START TEST nvmf_fuzz 00:15:06.702 ************************************ 00:15:06.702 06:01:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:15:06.978 * Looking for test storage... 00:15:06.978 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=d95af516-4532-4483-a837-b3cd72acabce 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:06.978 06:01:58 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:06.978 Cannot find device "nvmf_tgt_br" 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@155 -- # true 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:06.978 Cannot find device "nvmf_tgt_br2" 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@156 -- # true 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:06.978 Cannot find device "nvmf_tgt_br" 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@158 -- # true 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:06.978 Cannot find device "nvmf_tgt_br2" 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@159 -- # true 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@162 -- 
# ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:06.978 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@162 -- # true 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:06.978 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@163 -- # true 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:06.978 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:06.979 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:06.979 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:06.979 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:06.979 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:06.979 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:06.979 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:06.979 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:06.979 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:06.979 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:06.979 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:07.237 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:07.237 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:07.237 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:07.237 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:07.237 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:07.237 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:07.237 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:07.237 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:07.237 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:07.237 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:07.237 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:07.237 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:15:07.237 00:15:07.237 --- 10.0.0.2 ping statistics --- 00:15:07.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:07.237 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:15:07.237 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:07.237 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:07.237 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:15:07.237 00:15:07.237 --- 10.0.0.3 ping statistics --- 00:15:07.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:07.237 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:15:07.237 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:07.237 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:07.237 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:15:07.237 00:15:07.237 --- 10.0.0.1 ping statistics --- 00:15:07.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:07.237 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:15:07.237 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:07.237 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@433 -- # return 0 00:15:07.237 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:07.237 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:07.237 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:07.237 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:07.237 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:07.237 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:07.237 06:01:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:07.237 06:01:58 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=86039 00:15:07.237 06:01:58 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:07.237 06:01:58 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 86039 00:15:07.237 06:01:58 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:07.237 06:01:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@829 -- # '[' -z 86039 ']' 00:15:07.237 06:01:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:07.237 06:01:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:07.237 06:01:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:07.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
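For orientation, the fuzz target is brought up exactly the way the trace shows: nvmf_tgt is launched inside the nvmf_tgt_ns_spdk namespace and the script blocks until the app's RPC socket answers. The polling loop below is an approximation of waitforlisten (it probes a standard RPC over the UNIX socket), not the helper itself.

  #!/usr/bin/env bash
  spdk=/home/vagrant/spdk_repo/spdk

  # Start the target in the dedicated network namespace: shm id 0, all trace
  # groups enabled, single core (0x1).
  ip netns exec nvmf_tgt_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!

  # Poll the RPC socket until the app responds (stand-in for waitforlisten).
  until "$spdk/scripts/rpc.py" -t 1 -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
      sleep 0.5
  done
  echo "nvmf_tgt ($nvmfpid) is up and listening on /var/tmp/spdk.sock"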
00:15:07.237 06:01:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:07.237 06:01:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:08.171 06:01:59 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:08.171 06:01:59 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@862 -- # return 0 00:15:08.171 06:01:59 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:08.171 06:01:59 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.171 06:01:59 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:08.171 06:01:59 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.171 06:01:59 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:15:08.171 06:01:59 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.171 06:01:59 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:08.171 Malloc0 00:15:08.171 06:01:59 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.171 06:01:59 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:08.171 06:01:59 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.171 06:01:59 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:08.171 06:01:59 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.171 06:01:59 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:08.171 06:01:59 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.171 06:01:59 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:08.171 06:01:59 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.171 06:01:59 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:08.171 06:01:59 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.171 06:01:59 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:08.171 06:01:59 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.171 06:01:59 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:15:08.172 06:01:59 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:15:08.430 Shutting down the fuzz application 00:15:08.430 06:02:00 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:15:08.687 Shutting down the fuzz application 00:15:08.687 06:02:00 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:08.687 06:02:00 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.687 06:02:00 nvmf_tcp.nvmf_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:15:08.687 06:02:00 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.687 06:02:00 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:15:08.687 06:02:00 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:15:08.687 06:02:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:08.687 06:02:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:15:08.945 06:02:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:08.945 06:02:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:15:08.945 06:02:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:08.945 06:02:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:08.945 rmmod nvme_tcp 00:15:08.945 rmmod nvme_fabrics 00:15:08.945 rmmod nvme_keyring 00:15:08.945 06:02:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:08.945 06:02:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:15:08.945 06:02:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:15:08.945 06:02:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 86039 ']' 00:15:08.945 06:02:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 86039 00:15:08.945 06:02:00 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@948 -- # '[' -z 86039 ']' 00:15:08.945 06:02:00 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # kill -0 86039 00:15:08.945 06:02:00 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # uname 00:15:08.945 06:02:00 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:08.945 06:02:00 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86039 00:15:08.945 06:02:00 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:08.945 06:02:00 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:08.945 killing process with pid 86039 00:15:08.945 06:02:00 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86039' 00:15:08.945 06:02:00 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@967 -- # kill 86039 00:15:08.945 06:02:00 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@972 -- # wait 86039 00:15:08.945 06:02:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:08.945 06:02:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:08.945 06:02:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:08.945 06:02:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:08.945 06:02:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:08.945 06:02:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:08.945 06:02:00 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:08.945 06:02:00 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:09.203 06:02:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:09.203 06:02:00 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:15:09.203 00:15:09.203 real 0m2.316s 00:15:09.203 user 0m2.334s 00:15:09.203 sys 0m0.535s 00:15:09.203 06:02:00 
nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:09.203 ************************************ 00:15:09.203 END TEST nvmf_fuzz 00:15:09.203 ************************************ 00:15:09.203 06:02:00 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:09.203 06:02:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:09.203 06:02:00 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:15:09.203 06:02:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:09.203 06:02:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:09.203 06:02:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:09.203 ************************************ 00:15:09.203 START TEST nvmf_multiconnection 00:15:09.203 ************************************ 00:15:09.203 06:02:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:15:09.203 * Looking for test storage... 00:15:09.203 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:09.203 06:02:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:09.203 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:15:09.203 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:09.203 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:09.203 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:09.203 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:09.203 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:09.203 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:09.203 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:09.203 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:09.203 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:09.203 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:09.203 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:15:09.203 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=d95af516-4532-4483-a837-b3cd72acabce 00:15:09.203 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:09.203 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:09.203 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:09.203 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:09.203 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:09.203 06:02:00 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:09.203 06:02:00 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:09.203 06:02:00 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:09.203 06:02:00 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.203 06:02:00 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.203 06:02:00 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.203 06:02:00 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:15:09.203 06:02:00 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.203 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:15:09.203 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:09.203 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:09.203 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:09.203 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:09.203 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:09.203 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:09.203 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:09.203 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:15:09.203 06:02:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:09.203 06:02:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:09.203 06:02:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:15:09.203 06:02:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:15:09.203 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:09.203 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:09.203 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:09.203 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:09.203 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:09.204 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:09.204 06:02:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:09.204 06:02:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:09.204 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:09.204 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:09.204 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:09.204 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:09.204 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:09.204 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:09.204 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:09.204 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:09.204 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:09.204 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:09.204 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:09.204 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:09.204 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:09.204 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:09.204 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:09.204 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:09.204 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:09.204 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:09.204 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:09.204 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:09.204 Cannot find device "nvmf_tgt_br" 00:15:09.204 06:02:00 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@155 -- # true 00:15:09.204 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:09.204 Cannot find device "nvmf_tgt_br2" 00:15:09.204 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@156 -- # true 00:15:09.204 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:09.204 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:09.204 Cannot find device "nvmf_tgt_br" 00:15:09.204 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@158 -- # true 00:15:09.204 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:09.204 Cannot find device "nvmf_tgt_br2" 00:15:09.204 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@159 -- # true 00:15:09.204 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:09.461 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:09.461 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:09.461 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:09.461 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@162 -- # true 00:15:09.461 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:09.461 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:09.461 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@163 -- # true 00:15:09.461 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:09.461 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:09.461 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:09.461 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:09.461 06:02:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:09.461 06:02:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:09.461 06:02:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:09.461 06:02:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:09.462 06:02:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:09.462 06:02:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:09.462 06:02:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:09.462 06:02:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:09.462 06:02:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:09.462 06:02:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:09.462 06:02:01 nvmf_tcp.nvmf_multiconnection -- 
nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:09.462 06:02:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:09.462 06:02:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:09.462 06:02:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:09.462 06:02:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:09.462 06:02:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:09.462 06:02:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:09.462 06:02:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:09.462 06:02:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:09.462 06:02:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:09.462 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:09.462 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:15:09.462 00:15:09.462 --- 10.0.0.2 ping statistics --- 00:15:09.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:09.462 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:15:09.462 06:02:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:09.462 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:09.462 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:15:09.462 00:15:09.462 --- 10.0.0.3 ping statistics --- 00:15:09.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:09.462 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:15:09.462 06:02:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:09.462 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:09.462 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:15:09.462 00:15:09.462 --- 10.0.0.1 ping statistics --- 00:15:09.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:09.462 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:15:09.462 06:02:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:09.462 06:02:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@433 -- # return 0 00:15:09.462 06:02:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:09.462 06:02:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:09.462 06:02:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:09.462 06:02:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:09.462 06:02:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:09.462 06:02:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:09.462 06:02:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:09.720 06:02:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:15:09.720 06:02:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:09.720 06:02:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:09.720 06:02:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:09.720 06:02:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=86229 00:15:09.720 06:02:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 86229 00:15:09.720 06:02:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@829 -- # '[' -z 86229 ']' 00:15:09.720 06:02:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:09.720 06:02:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:09.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:09.720 06:02:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:09.720 06:02:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:09.720 06:02:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:09.720 06:02:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:09.720 [2024-07-13 06:02:01.255457] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:15:09.720 [2024-07-13 06:02:01.255567] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:09.720 [2024-07-13 06:02:01.391027] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:09.720 [2024-07-13 06:02:01.427394] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:15:09.720 [2024-07-13 06:02:01.427467] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:09.720 [2024-07-13 06:02:01.427478] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:09.720 [2024-07-13 06:02:01.427485] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:09.720 [2024-07-13 06:02:01.427491] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:09.720 [2024-07-13 06:02:01.427560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:09.720 [2024-07-13 06:02:01.427937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:09.720 [2024-07-13 06:02:01.428573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:09.720 [2024-07-13 06:02:01.428581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.978 [2024-07-13 06:02:01.459628] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:10.544 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:10.544 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@862 -- # return 0 00:15:10.544 06:02:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:10.544 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:10.544 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:10.544 06:02:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:10.544 06:02:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:10.544 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.544 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:10.544 [2024-07-13 06:02:02.238625] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:10.544 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.544 06:02:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:15:10.544 06:02:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:10.544 06:02:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:10.544 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.544 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:10.802 Malloc1 00:15:10.802 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.802 06:02:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:15:10.802 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.802 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:10.802 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.802 06:02:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # 
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:10.802 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.802 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:10.802 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.802 06:02:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:10.802 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.802 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:10.802 [2024-07-13 06:02:02.305646] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:10.802 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.802 06:02:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:10.802 06:02:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:15:10.802 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.802 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:10.802 Malloc2 00:15:10.802 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.802 06:02:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:15:10.802 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.802 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:10.802 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.802 06:02:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:15:10.802 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.802 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:10.802 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.802 06:02:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:10.802 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.802 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:10.802 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.802 06:02:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:10.802 06:02:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:15:10.802 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.802 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:10.802 Malloc3 00:15:10.802 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
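Each of the 11 subsystems in this test is provisioned with the same four RPCs seen above: create a 64 MB malloc bdev with a 512-byte block size, create the subsystem with allow-any-host and a SPDKn serial, attach the bdev as a namespace, and add a TCP listener on 10.0.0.2:4420. A condensed sketch of that loop, calling SPDK's scripts/rpc.py directly instead of the harness's rpc_cmd wrapper (the repo path follows the one shown elsewhere in this log):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # rpc_cmd in the harness wraps this script

for i in $(seq 1 11); do
    # Backing device: 64 MB malloc bdev, 512-byte blocks
    "$rpc" bdev_malloc_create 64 512 -b "Malloc$i"
    # Subsystem with allow-any-host (-a) and serial number SPDK$i
    "$rpc" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    # Expose the malloc bdev as a namespace of the subsystem
    "$rpc" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    # Listen for NVMe/TCP initiators on the bridged target address
    "$rpc" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done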
00:15:10.802 06:02:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:15:10.802 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.802 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:10.802 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.802 06:02:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:15:10.803 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.803 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:10.803 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.803 06:02:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:15:10.803 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.803 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:10.803 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.803 06:02:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:10.803 06:02:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:15:10.803 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.803 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:10.803 Malloc4 00:15:10.803 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.803 06:02:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:15:10.803 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.803 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:10.803 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.803 06:02:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:15:10.803 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.803 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:10.803 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.803 06:02:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:15:10.803 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.803 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:10.803 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.803 06:02:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:10.803 06:02:02 
nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:15:10.803 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.803 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:10.803 Malloc5 00:15:10.803 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.803 06:02:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:15:10.803 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.803 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:10.803 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.803 06:02:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:15:10.803 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.803 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:10.803 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.803 06:02:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:15:10.803 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.803 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:10.803 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.803 06:02:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:10.803 06:02:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:15:10.803 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.803 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:10.803 Malloc6 00:15:10.803 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.803 06:02:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:15:10.803 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.803 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:10.803 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.803 06:02:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:15:10.803 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.803 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:10.803 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.803 06:02:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:15:10.803 06:02:02 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.803 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:10.803 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.803 06:02:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:10.803 06:02:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:15:10.803 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.803 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:11.061 Malloc7 00:15:11.061 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.061 06:02:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:15:11.061 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.061 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:11.061 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.061 06:02:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:11.062 Malloc8 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:11.062 Malloc9 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:11.062 Malloc10 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.062 06:02:02 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:11.062 Malloc11 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:11.062 06:02:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid=d95af516-4532-4483-a837-b3cd72acabce 
-t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:11.319 06:02:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:15:11.319 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:15:11.319 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:11.319 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:11.319 06:02:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:15:13.226 06:02:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:13.226 06:02:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:15:13.226 06:02:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:13.226 06:02:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:13.226 06:02:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:13.227 06:02:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:15:13.227 06:02:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:13.227 06:02:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid=d95af516-4532-4483-a837-b3cd72acabce -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:15:13.484 06:02:05 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:15:13.484 06:02:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:15:13.484 06:02:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:13.484 06:02:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:13.484 06:02:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:15:15.387 06:02:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:15.387 06:02:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:15.387 06:02:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:15:15.387 06:02:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:15.387 06:02:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:15.387 06:02:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:15:15.387 06:02:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:15.387 06:02:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid=d95af516-4532-4483-a837-b3cd72acabce -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:15:15.644 06:02:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:15:15.644 06:02:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:15:15.644 06:02:07 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:15.644 06:02:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:15.644 06:02:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:15:17.541 06:02:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:17.541 06:02:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:17.541 06:02:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:15:17.541 06:02:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:17.541 06:02:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:17.541 06:02:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:15:17.541 06:02:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:17.541 06:02:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid=d95af516-4532-4483-a837-b3cd72acabce -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:15:17.798 06:02:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:15:17.798 06:02:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:15:17.798 06:02:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:17.798 06:02:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:17.798 06:02:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:15:19.701 06:02:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:19.701 06:02:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:19.701 06:02:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:15:19.701 06:02:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:19.701 06:02:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:19.701 06:02:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:15:19.701 06:02:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:19.701 06:02:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid=d95af516-4532-4483-a837-b3cd72acabce -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:15:19.958 06:02:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:15:19.958 06:02:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:15:19.958 06:02:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:19.958 06:02:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:19.958 06:02:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:15:21.862 06:02:13 
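Every nvme connect above is followed by the harness's waitforserial helper, whose xtrace is what produces the repeated local i=0 / sleep 2 / lsblk / grep -c SPDKn lines: it polls the block-device list until exactly one device with the expected serial appears, retrying up to 16 times two seconds apart. A sketch of that polling pattern as reconstructed from the trace (the HOST_NQN/HOST_ID variables stand in for the uuid-based host NQN and host ID used in the log):

# Reconstructed from the xtrace; the real helper lives in common/autotest_common.sh
waitforserial() {
    local serial=$1 i=0
    local nvme_device_counter=1 nvme_devices=0
    while (( i++ <= 15 )); do
        sleep 2
        # Count block devices whose SERIAL column matches, e.g. SPDK1
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( nvme_devices == nvme_device_counter )) && return 0
    done
    return 1
}

nvme connect --hostnqn="$HOST_NQN" --hostid="$HOST_ID" \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
waitforserial SPDK1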
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:21.862 06:02:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:21.862 06:02:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:15:21.862 06:02:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:21.862 06:02:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:21.862 06:02:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:15:21.862 06:02:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:21.862 06:02:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid=d95af516-4532-4483-a837-b3cd72acabce -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:15:22.119 06:02:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:15:22.119 06:02:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:15:22.119 06:02:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:22.119 06:02:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:22.119 06:02:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:15:24.016 06:02:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:24.016 06:02:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:24.016 06:02:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:15:24.016 06:02:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:24.016 06:02:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:24.016 06:02:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:15:24.016 06:02:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:24.016 06:02:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid=d95af516-4532-4483-a837-b3cd72acabce -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:15:24.274 06:02:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:15:24.274 06:02:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:15:24.274 06:02:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:24.274 06:02:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:24.274 06:02:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:15:26.182 06:02:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:26.182 06:02:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:26.182 06:02:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:15:26.182 
06:02:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:26.182 06:02:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:26.182 06:02:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:15:26.182 06:02:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:26.182 06:02:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid=d95af516-4532-4483-a837-b3cd72acabce -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:15:26.454 06:02:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:15:26.454 06:02:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:15:26.454 06:02:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:26.454 06:02:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:26.454 06:02:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:15:28.352 06:02:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:28.352 06:02:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:15:28.352 06:02:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:28.352 06:02:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:28.352 06:02:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:28.352 06:02:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:15:28.352 06:02:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:28.352 06:02:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid=d95af516-4532-4483-a837-b3cd72acabce -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:15:28.609 06:02:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:15:28.609 06:02:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:15:28.609 06:02:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:28.609 06:02:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:28.609 06:02:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:15:30.516 06:02:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:30.516 06:02:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:30.516 06:02:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:15:30.516 06:02:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:30.516 06:02:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:30.516 06:02:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # 
return 0 00:15:30.516 06:02:22 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:30.516 06:02:22 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid=d95af516-4532-4483-a837-b3cd72acabce -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:15:30.773 06:02:22 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:15:30.773 06:02:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:15:30.773 06:02:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:30.773 06:02:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:30.773 06:02:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:15:32.673 06:02:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:32.673 06:02:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:32.673 06:02:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:15:32.673 06:02:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:32.673 06:02:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:32.673 06:02:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:15:32.673 06:02:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:32.673 06:02:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid=d95af516-4532-4483-a837-b3cd72acabce -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:15:32.942 06:02:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:15:32.942 06:02:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:15:32.942 06:02:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:32.942 06:02:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:32.942 06:02:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:15:34.843 06:02:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:34.843 06:02:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:34.843 06:02:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:15:34.843 06:02:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:34.843 06:02:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:34.843 06:02:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:15:34.843 06:02:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:15:34.843 [global] 00:15:34.843 thread=1 00:15:34.843 invalidate=1 00:15:34.843 rw=read 00:15:34.843 time_based=1 00:15:34.843 
runtime=10 00:15:34.843 ioengine=libaio 00:15:34.843 direct=1 00:15:34.843 bs=262144 00:15:34.843 iodepth=64 00:15:34.843 norandommap=1 00:15:34.843 numjobs=1 00:15:34.843 00:15:34.843 [job0] 00:15:34.843 filename=/dev/nvme0n1 00:15:34.843 [job1] 00:15:34.843 filename=/dev/nvme10n1 00:15:34.843 [job2] 00:15:34.843 filename=/dev/nvme1n1 00:15:34.843 [job3] 00:15:34.843 filename=/dev/nvme2n1 00:15:34.843 [job4] 00:15:34.843 filename=/dev/nvme3n1 00:15:34.843 [job5] 00:15:34.843 filename=/dev/nvme4n1 00:15:34.843 [job6] 00:15:34.843 filename=/dev/nvme5n1 00:15:34.843 [job7] 00:15:34.843 filename=/dev/nvme6n1 00:15:34.843 [job8] 00:15:34.843 filename=/dev/nvme7n1 00:15:34.843 [job9] 00:15:34.843 filename=/dev/nvme8n1 00:15:34.843 [job10] 00:15:34.843 filename=/dev/nvme9n1 00:15:35.101 Could not set queue depth (nvme0n1) 00:15:35.101 Could not set queue depth (nvme10n1) 00:15:35.101 Could not set queue depth (nvme1n1) 00:15:35.101 Could not set queue depth (nvme2n1) 00:15:35.101 Could not set queue depth (nvme3n1) 00:15:35.101 Could not set queue depth (nvme4n1) 00:15:35.101 Could not set queue depth (nvme5n1) 00:15:35.101 Could not set queue depth (nvme6n1) 00:15:35.101 Could not set queue depth (nvme7n1) 00:15:35.101 Could not set queue depth (nvme8n1) 00:15:35.101 Could not set queue depth (nvme9n1) 00:15:35.101 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:35.101 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:35.101 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:35.101 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:35.101 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:35.101 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:35.101 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:35.101 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:35.101 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:35.101 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:35.101 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:35.101 fio-3.35 00:15:35.101 Starting 11 threads 00:15:47.297 00:15:47.297 job0: (groupid=0, jobs=1): err= 0: pid=86682: Sat Jul 13 06:02:37 2024 00:15:47.297 read: IOPS=651, BW=163MiB/s (171MB/s)(1642MiB/10079msec) 00:15:47.297 slat (usec): min=21, max=41052, avg=1518.26, stdev=3355.06 00:15:47.297 clat (msec): min=47, max=175, avg=96.59, stdev=11.86 00:15:47.297 lat (msec): min=47, max=175, avg=98.11, stdev=11.92 00:15:47.297 clat percentiles (msec): 00:15:47.297 | 1.00th=[ 77], 5.00th=[ 83], 10.00th=[ 85], 20.00th=[ 89], 00:15:47.297 | 30.00th=[ 91], 40.00th=[ 93], 50.00th=[ 95], 60.00th=[ 97], 00:15:47.297 | 70.00th=[ 101], 80.00th=[ 104], 90.00th=[ 109], 95.00th=[ 117], 00:15:47.297 | 99.00th=[ 144], 99.50th=[ 150], 99.90th=[ 167], 99.95th=[ 167], 00:15:47.297 | 99.99th=[ 176] 00:15:47.297 bw ( KiB/s): min=128000, max=180736, per=8.24%, avg=166468.20, 
stdev=13123.06, samples=20 00:15:47.297 iops : min= 500, max= 706, avg=650.20, stdev=51.24, samples=20 00:15:47.297 lat (msec) : 50=0.05%, 100=70.70%, 250=29.25% 00:15:47.297 cpu : usr=0.35%, sys=2.72%, ctx=1483, majf=0, minf=4097 00:15:47.297 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:15:47.297 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:47.297 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:47.297 issued rwts: total=6567,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:47.297 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:47.297 job1: (groupid=0, jobs=1): err= 0: pid=86683: Sat Jul 13 06:02:37 2024 00:15:47.297 read: IOPS=524, BW=131MiB/s (138MB/s)(1321MiB/10072msec) 00:15:47.297 slat (usec): min=21, max=115709, avg=1866.22, stdev=4491.35 00:15:47.297 clat (msec): min=34, max=249, avg=119.98, stdev=18.81 00:15:47.297 lat (msec): min=34, max=249, avg=121.84, stdev=19.18 00:15:47.297 clat percentiles (msec): 00:15:47.297 | 1.00th=[ 77], 5.00th=[ 89], 10.00th=[ 94], 20.00th=[ 104], 00:15:47.297 | 30.00th=[ 117], 40.00th=[ 121], 50.00th=[ 123], 60.00th=[ 125], 00:15:47.297 | 70.00th=[ 127], 80.00th=[ 130], 90.00th=[ 140], 95.00th=[ 155], 00:15:47.297 | 99.00th=[ 171], 99.50th=[ 174], 99.90th=[ 178], 99.95th=[ 182], 00:15:47.297 | 99.99th=[ 251] 00:15:47.297 bw ( KiB/s): min=84136, max=172544, per=6.62%, avg=133639.45, stdev=21409.15, samples=20 00:15:47.297 iops : min= 328, max= 674, avg=521.90, stdev=83.71, samples=20 00:15:47.297 lat (msec) : 50=0.25%, 100=17.56%, 250=82.19% 00:15:47.297 cpu : usr=0.32%, sys=1.92%, ctx=1280, majf=0, minf=4097 00:15:47.297 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:15:47.297 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:47.297 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:47.297 issued rwts: total=5284,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:47.297 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:47.297 job2: (groupid=0, jobs=1): err= 0: pid=86684: Sat Jul 13 06:02:37 2024 00:15:47.297 read: IOPS=683, BW=171MiB/s (179MB/s)(1722MiB/10072msec) 00:15:47.297 slat (usec): min=19, max=51733, avg=1435.30, stdev=3294.42 00:15:47.297 clat (msec): min=4, max=170, avg=92.05, stdev=16.83 00:15:47.297 lat (msec): min=4, max=177, avg=93.49, stdev=17.00 00:15:47.297 clat percentiles (msec): 00:15:47.297 | 1.00th=[ 56], 5.00th=[ 64], 10.00th=[ 68], 20.00th=[ 82], 00:15:47.297 | 30.00th=[ 87], 40.00th=[ 91], 50.00th=[ 93], 60.00th=[ 96], 00:15:47.297 | 70.00th=[ 100], 80.00th=[ 104], 90.00th=[ 111], 95.00th=[ 117], 00:15:47.297 | 99.00th=[ 134], 99.50th=[ 142], 99.90th=[ 171], 99.95th=[ 171], 00:15:47.297 | 99.99th=[ 171] 00:15:47.297 bw ( KiB/s): min=123126, max=248832, per=8.65%, avg=174664.50, stdev=26331.74, samples=20 00:15:47.297 iops : min= 480, max= 972, avg=682.20, stdev=102.97, samples=20 00:15:47.297 lat (msec) : 10=0.09%, 20=0.20%, 50=0.55%, 100=70.68%, 250=28.48% 00:15:47.297 cpu : usr=0.49%, sys=2.48%, ctx=1584, majf=0, minf=4097 00:15:47.297 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:15:47.297 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:47.297 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:47.297 issued rwts: total=6886,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:47.297 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:47.297 
job3: (groupid=0, jobs=1): err= 0: pid=86686: Sat Jul 13 06:02:37 2024 00:15:47.297 read: IOPS=646, BW=162MiB/s (170MB/s)(1631MiB/10084msec) 00:15:47.297 slat (usec): min=17, max=77232, avg=1529.65, stdev=3550.38 00:15:47.297 clat (msec): min=19, max=174, avg=97.24, stdev=13.75 00:15:47.297 lat (msec): min=20, max=183, avg=98.77, stdev=13.83 00:15:47.297 clat percentiles (msec): 00:15:47.297 | 1.00th=[ 77], 5.00th=[ 82], 10.00th=[ 85], 20.00th=[ 88], 00:15:47.297 | 30.00th=[ 91], 40.00th=[ 93], 50.00th=[ 95], 60.00th=[ 97], 00:15:47.297 | 70.00th=[ 101], 80.00th=[ 105], 90.00th=[ 111], 95.00th=[ 125], 00:15:47.297 | 99.00th=[ 148], 99.50th=[ 150], 99.90th=[ 165], 99.95th=[ 171], 00:15:47.297 | 99.99th=[ 176] 00:15:47.297 bw ( KiB/s): min=116224, max=180736, per=8.19%, avg=165384.65, stdev=15104.95, samples=20 00:15:47.297 iops : min= 454, max= 706, avg=646.00, stdev=59.00, samples=20 00:15:47.297 lat (msec) : 20=0.02%, 50=0.41%, 100=69.50%, 250=30.07% 00:15:47.297 cpu : usr=0.17%, sys=2.07%, ctx=1533, majf=0, minf=4097 00:15:47.297 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:15:47.297 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:47.297 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:47.297 issued rwts: total=6524,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:47.297 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:47.297 job4: (groupid=0, jobs=1): err= 0: pid=86688: Sat Jul 13 06:02:37 2024 00:15:47.297 read: IOPS=1598, BW=400MiB/s (419MB/s)(4003MiB/10015msec) 00:15:47.297 slat (usec): min=16, max=16024, avg=620.87, stdev=1419.02 00:15:47.297 clat (usec): min=12699, max=87016, avg=39355.20, stdev=11922.81 00:15:47.297 lat (usec): min=17392, max=87052, avg=39976.07, stdev=12095.72 00:15:47.297 clat percentiles (usec): 00:15:47.297 | 1.00th=[29754], 5.00th=[31065], 10.00th=[31851], 20.00th=[32637], 00:15:47.297 | 30.00th=[33424], 40.00th=[33817], 50.00th=[34341], 60.00th=[34866], 00:15:47.297 | 70.00th=[35390], 80.00th=[38011], 90.00th=[63177], 95.00th=[66323], 00:15:47.297 | 99.00th=[71828], 99.50th=[72877], 99.90th=[77071], 99.95th=[80217], 00:15:47.297 | 99.99th=[83362] 00:15:47.297 bw ( KiB/s): min=244736, max=484352, per=20.22%, avg=408224.60, stdev=105404.30, samples=20 00:15:47.297 iops : min= 956, max= 1892, avg=1594.60, stdev=411.72, samples=20 00:15:47.297 lat (msec) : 20=0.05%, 50=81.76%, 100=18.19% 00:15:47.297 cpu : usr=0.57%, sys=4.68%, ctx=3303, majf=0, minf=4097 00:15:47.297 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:15:47.297 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:47.297 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:47.297 issued rwts: total=16013,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:47.297 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:47.297 job5: (groupid=0, jobs=1): err= 0: pid=86691: Sat Jul 13 06:02:37 2024 00:15:47.297 read: IOPS=671, BW=168MiB/s (176MB/s)(1691MiB/10076msec) 00:15:47.297 slat (usec): min=17, max=88343, avg=1457.86, stdev=3569.73 00:15:47.297 clat (msec): min=9, max=182, avg=93.81, stdev=21.32 00:15:47.297 lat (msec): min=9, max=229, avg=95.27, stdev=21.65 00:15:47.297 clat percentiles (msec): 00:15:47.297 | 1.00th=[ 54], 5.00th=[ 64], 10.00th=[ 68], 20.00th=[ 83], 00:15:47.297 | 30.00th=[ 88], 40.00th=[ 91], 50.00th=[ 93], 60.00th=[ 96], 00:15:47.297 | 70.00th=[ 100], 80.00th=[ 103], 90.00th=[ 110], 95.00th=[ 144], 
00:15:47.297 | 99.00th=[ 169], 99.50th=[ 174], 99.90th=[ 176], 99.95th=[ 176], 00:15:47.297 | 99.99th=[ 182] 00:15:47.297 bw ( KiB/s): min=103936, max=242688, per=8.49%, avg=171477.60, stdev=30106.77, samples=20 00:15:47.297 iops : min= 406, max= 948, avg=669.80, stdev=117.61, samples=20 00:15:47.297 lat (msec) : 10=0.01%, 20=0.12%, 50=0.55%, 100=73.16%, 250=26.16% 00:15:47.297 cpu : usr=0.32%, sys=2.15%, ctx=1613, majf=0, minf=4097 00:15:47.297 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:15:47.297 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:47.297 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:47.297 issued rwts: total=6763,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:47.297 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:47.297 job6: (groupid=0, jobs=1): err= 0: pid=86692: Sat Jul 13 06:02:37 2024 00:15:47.297 read: IOPS=528, BW=132MiB/s (138MB/s)(1331MiB/10079msec) 00:15:47.297 slat (usec): min=20, max=53739, avg=1853.58, stdev=4100.65 00:15:47.297 clat (msec): min=12, max=177, avg=119.15, stdev=17.12 00:15:47.297 lat (msec): min=16, max=186, avg=121.00, stdev=17.44 00:15:47.297 clat percentiles (msec): 00:15:47.297 | 1.00th=[ 82], 5.00th=[ 90], 10.00th=[ 94], 20.00th=[ 104], 00:15:47.297 | 30.00th=[ 117], 40.00th=[ 121], 50.00th=[ 123], 60.00th=[ 125], 00:15:47.297 | 70.00th=[ 127], 80.00th=[ 130], 90.00th=[ 138], 95.00th=[ 148], 00:15:47.297 | 99.00th=[ 159], 99.50th=[ 163], 99.90th=[ 169], 99.95th=[ 169], 00:15:47.297 | 99.99th=[ 178] 00:15:47.297 bw ( KiB/s): min=104960, max=169984, per=6.67%, avg=134643.30, stdev=17926.20, samples=20 00:15:47.297 iops : min= 410, max= 664, avg=525.95, stdev=70.02, samples=20 00:15:47.297 lat (msec) : 20=0.13%, 50=0.08%, 100=17.51%, 250=82.28% 00:15:47.297 cpu : usr=0.36%, sys=2.35%, ctx=1311, majf=0, minf=4097 00:15:47.297 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:15:47.297 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:47.297 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:47.297 issued rwts: total=5323,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:47.297 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:47.297 job7: (groupid=0, jobs=1): err= 0: pid=86693: Sat Jul 13 06:02:37 2024 00:15:47.297 read: IOPS=526, BW=132MiB/s (138MB/s)(1327MiB/10080msec) 00:15:47.297 slat (usec): min=21, max=62482, avg=1878.98, stdev=4269.12 00:15:47.298 clat (msec): min=33, max=179, avg=119.47, stdev=16.91 00:15:47.298 lat (msec): min=34, max=202, avg=121.35, stdev=17.26 00:15:47.298 clat percentiles (msec): 00:15:47.298 | 1.00th=[ 85], 5.00th=[ 90], 10.00th=[ 93], 20.00th=[ 102], 00:15:47.298 | 30.00th=[ 117], 40.00th=[ 121], 50.00th=[ 123], 60.00th=[ 125], 00:15:47.298 | 70.00th=[ 127], 80.00th=[ 130], 90.00th=[ 138], 95.00th=[ 148], 00:15:47.298 | 99.00th=[ 165], 99.50th=[ 174], 99.90th=[ 178], 99.95th=[ 180], 00:15:47.298 | 99.99th=[ 180] 00:15:47.298 bw ( KiB/s): min=99328, max=174080, per=6.65%, avg=134284.45, stdev=19318.61, samples=20 00:15:47.298 iops : min= 388, max= 680, avg=524.50, stdev=75.47, samples=20 00:15:47.298 lat (msec) : 50=0.09%, 100=18.80%, 250=81.11% 00:15:47.298 cpu : usr=0.32%, sys=1.99%, ctx=1291, majf=0, minf=4097 00:15:47.298 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:15:47.298 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:47.298 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:47.298 issued rwts: total=5309,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:47.298 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:47.298 job8: (groupid=0, jobs=1): err= 0: pid=86696: Sat Jul 13 06:02:37 2024 00:15:47.298 read: IOPS=524, BW=131MiB/s (137MB/s)(1321MiB/10078msec) 00:15:47.298 slat (usec): min=21, max=72550, avg=1887.33, stdev=4403.46 00:15:47.298 clat (msec): min=50, max=194, avg=120.03, stdev=18.24 00:15:47.298 lat (msec): min=50, max=201, avg=121.92, stdev=18.57 00:15:47.298 clat percentiles (msec): 00:15:47.298 | 1.00th=[ 84], 5.00th=[ 90], 10.00th=[ 94], 20.00th=[ 102], 00:15:47.298 | 30.00th=[ 117], 40.00th=[ 121], 50.00th=[ 123], 60.00th=[ 125], 00:15:47.298 | 70.00th=[ 128], 80.00th=[ 131], 90.00th=[ 140], 95.00th=[ 150], 00:15:47.298 | 99.00th=[ 167], 99.50th=[ 171], 99.90th=[ 178], 99.95th=[ 180], 00:15:47.298 | 99.99th=[ 194] 00:15:47.298 bw ( KiB/s): min=100864, max=172032, per=6.62%, avg=133670.15, stdev=18694.46, samples=20 00:15:47.298 iops : min= 394, max= 672, avg=522.10, stdev=73.03, samples=20 00:15:47.298 lat (msec) : 100=18.98%, 250=81.02% 00:15:47.298 cpu : usr=0.24%, sys=2.19%, ctx=1236, majf=0, minf=4097 00:15:47.298 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:15:47.298 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:47.298 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:47.298 issued rwts: total=5285,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:47.298 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:47.298 job9: (groupid=0, jobs=1): err= 0: pid=86701: Sat Jul 13 06:02:37 2024 00:15:47.298 read: IOPS=900, BW=225MiB/s (236MB/s)(2254MiB/10016msec) 00:15:47.298 slat (usec): min=20, max=22768, avg=1047.10, stdev=2545.64 00:15:47.298 clat (msec): min=6, max=123, avg=69.97, stdev=25.28 00:15:47.298 lat (msec): min=6, max=127, avg=71.02, stdev=25.68 00:15:47.298 clat percentiles (msec): 00:15:47.298 | 1.00th=[ 26], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 36], 00:15:47.298 | 30.00th=[ 59], 40.00th=[ 64], 50.00th=[ 68], 60.00th=[ 84], 00:15:47.298 | 70.00th=[ 90], 80.00th=[ 95], 90.00th=[ 102], 95.00th=[ 106], 00:15:47.298 | 99.00th=[ 113], 99.50th=[ 116], 99.90th=[ 122], 99.95th=[ 123], 00:15:47.298 | 99.99th=[ 125] 00:15:47.298 bw ( KiB/s): min=164534, max=483328, per=11.35%, avg=229129.50, stdev=92833.59, samples=20 00:15:47.298 iops : min= 642, max= 1888, avg=894.90, stdev=362.73, samples=20 00:15:47.298 lat (msec) : 10=0.04%, 20=0.42%, 50=23.65%, 100=64.06%, 250=11.82% 00:15:47.298 cpu : usr=0.55%, sys=3.20%, ctx=2005, majf=0, minf=4097 00:15:47.298 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:15:47.298 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:47.298 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:47.298 issued rwts: total=9016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:47.298 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:47.298 job10: (groupid=0, jobs=1): err= 0: pid=86702: Sat Jul 13 06:02:37 2024 00:15:47.298 read: IOPS=651, BW=163MiB/s (171MB/s)(1643MiB/10079msec) 00:15:47.298 slat (usec): min=21, max=66984, avg=1498.04, stdev=3391.92 00:15:47.298 clat (msec): min=52, max=170, avg=96.55, stdev=12.00 00:15:47.298 lat (msec): min=53, max=177, avg=98.05, stdev=12.13 00:15:47.298 clat percentiles (msec): 00:15:47.298 | 1.00th=[ 70], 5.00th=[ 81], 10.00th=[ 85], 20.00th=[ 
89], 00:15:47.298 | 30.00th=[ 91], 40.00th=[ 93], 50.00th=[ 95], 60.00th=[ 99], 00:15:47.298 | 70.00th=[ 101], 80.00th=[ 105], 90.00th=[ 110], 95.00th=[ 120], 00:15:47.298 | 99.00th=[ 134], 99.50th=[ 138], 99.90th=[ 167], 99.95th=[ 171], 00:15:47.298 | 99.99th=[ 171] 00:15:47.298 bw ( KiB/s): min=133386, max=181760, per=8.25%, avg=166575.95, stdev=12357.63, samples=20 00:15:47.298 iops : min= 521, max= 710, avg=650.65, stdev=48.28, samples=20 00:15:47.298 lat (msec) : 100=68.07%, 250=31.93% 00:15:47.298 cpu : usr=0.38%, sys=2.95%, ctx=1546, majf=0, minf=4097 00:15:47.298 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:15:47.298 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:47.298 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:47.298 issued rwts: total=6570,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:47.298 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:47.298 00:15:47.298 Run status group 0 (all jobs): 00:15:47.298 READ: bw=1972MiB/s (2068MB/s), 131MiB/s-400MiB/s (137MB/s-419MB/s), io=19.4GiB (20.8GB), run=10015-10084msec 00:15:47.298 00:15:47.298 Disk stats (read/write): 00:15:47.298 nvme0n1: ios=12937/0, merge=0/0, ticks=1227206/0, in_queue=1227206, util=97.26% 00:15:47.298 nvme10n1: ios=10379/0, merge=0/0, ticks=1223875/0, in_queue=1223875, util=97.38% 00:15:47.298 nvme1n1: ios=13592/0, merge=0/0, ticks=1227458/0, in_queue=1227458, util=97.55% 00:15:47.298 nvme2n1: ios=12872/0, merge=0/0, ticks=1229837/0, in_queue=1229837, util=97.71% 00:15:47.298 nvme3n1: ios=31817/0, merge=0/0, ticks=1232936/0, in_queue=1232936, util=97.76% 00:15:47.298 nvme4n1: ios=13330/0, merge=0/0, ticks=1227650/0, in_queue=1227650, util=98.19% 00:15:47.298 nvme5n1: ios=10481/0, merge=0/0, ticks=1226293/0, in_queue=1226293, util=98.29% 00:15:47.298 nvme6n1: ios=10437/0, merge=0/0, ticks=1225201/0, in_queue=1225201, util=98.39% 00:15:47.298 nvme7n1: ios=10400/0, merge=0/0, ticks=1225874/0, in_queue=1225874, util=98.86% 00:15:47.298 nvme8n1: ios=17812/0, merge=0/0, ticks=1233330/0, in_queue=1233330, util=98.98% 00:15:47.298 nvme9n1: ios=12963/0, merge=0/0, ticks=1230215/0, in_queue=1230215, util=99.10% 00:15:47.298 06:02:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:15:47.298 [global] 00:15:47.298 thread=1 00:15:47.298 invalidate=1 00:15:47.298 rw=randwrite 00:15:47.298 time_based=1 00:15:47.298 runtime=10 00:15:47.298 ioengine=libaio 00:15:47.298 direct=1 00:15:47.298 bs=262144 00:15:47.298 iodepth=64 00:15:47.298 norandommap=1 00:15:47.298 numjobs=1 00:15:47.298 00:15:47.298 [job0] 00:15:47.298 filename=/dev/nvme0n1 00:15:47.298 [job1] 00:15:47.298 filename=/dev/nvme10n1 00:15:47.298 [job2] 00:15:47.298 filename=/dev/nvme1n1 00:15:47.298 [job3] 00:15:47.298 filename=/dev/nvme2n1 00:15:47.298 [job4] 00:15:47.298 filename=/dev/nvme3n1 00:15:47.298 [job5] 00:15:47.298 filename=/dev/nvme4n1 00:15:47.298 [job6] 00:15:47.298 filename=/dev/nvme5n1 00:15:47.298 [job7] 00:15:47.298 filename=/dev/nvme6n1 00:15:47.298 [job8] 00:15:47.298 filename=/dev/nvme7n1 00:15:47.298 [job9] 00:15:47.298 filename=/dev/nvme8n1 00:15:47.298 [job10] 00:15:47.298 filename=/dev/nvme9n1 00:15:47.298 Could not set queue depth (nvme0n1) 00:15:47.298 Could not set queue depth (nvme10n1) 00:15:47.298 Could not set queue depth (nvme1n1) 00:15:47.298 Could not set queue depth (nvme2n1) 00:15:47.298 Could not set queue depth 
(nvme3n1) 00:15:47.298 Could not set queue depth (nvme4n1) 00:15:47.298 Could not set queue depth (nvme5n1) 00:15:47.298 Could not set queue depth (nvme6n1) 00:15:47.298 Could not set queue depth (nvme7n1) 00:15:47.298 Could not set queue depth (nvme8n1) 00:15:47.298 Could not set queue depth (nvme9n1) 00:15:47.298 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:47.298 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:47.298 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:47.298 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:47.298 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:47.298 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:47.298 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:47.298 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:47.298 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:47.298 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:47.298 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:47.298 fio-3.35 00:15:47.298 Starting 11 threads 00:15:57.271 00:15:57.271 job0: (groupid=0, jobs=1): err= 0: pid=86899: Sat Jul 13 06:02:48 2024 00:15:57.271 write: IOPS=449, BW=112MiB/s (118MB/s)(1140MiB/10146msec); 0 zone resets 00:15:57.271 slat (usec): min=17, max=22930, avg=2187.51, stdev=3783.23 00:15:57.271 clat (msec): min=7, max=296, avg=140.11, stdev=20.96 00:15:57.271 lat (msec): min=7, max=296, avg=142.30, stdev=20.91 00:15:57.271 clat percentiles (msec): 00:15:57.271 | 1.00th=[ 105], 5.00th=[ 120], 10.00th=[ 122], 20.00th=[ 126], 00:15:57.271 | 30.00th=[ 128], 40.00th=[ 129], 50.00th=[ 131], 60.00th=[ 150], 00:15:57.271 | 70.00th=[ 157], 80.00th=[ 159], 90.00th=[ 161], 95.00th=[ 161], 00:15:57.271 | 99.00th=[ 190], 99.50th=[ 239], 99.90th=[ 288], 99.95th=[ 288], 00:15:57.271 | 99.99th=[ 296] 00:15:57.271 bw ( KiB/s): min=102400, max=131334, per=8.29%, avg=115262.80, stdev=12488.47, samples=20 00:15:57.271 iops : min= 400, max= 513, avg=449.90, stdev=48.91, samples=20 00:15:57.271 lat (msec) : 10=0.09%, 20=0.09%, 50=0.26%, 100=0.53%, 250=98.64% 00:15:57.271 lat (msec) : 500=0.39% 00:15:57.271 cpu : usr=0.88%, sys=1.34%, ctx=5581, majf=0, minf=1 00:15:57.271 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:15:57.271 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:57.271 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:57.271 issued rwts: total=0,4561,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:57.271 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:57.271 job1: (groupid=0, jobs=1): err= 0: pid=86900: Sat Jul 13 06:02:48 2024 00:15:57.271 write: IOPS=489, BW=122MiB/s (128MB/s)(1240MiB/10145msec); 0 zone resets 00:15:57.271 slat (usec): min=17, max=12866, avg=1981.52, stdev=3483.08 00:15:57.271 
clat (msec): min=15, max=290, avg=128.84, stdev=25.22 00:15:57.271 lat (msec): min=15, max=290, avg=130.82, stdev=25.37 00:15:57.271 clat percentiles (msec): 00:15:57.271 | 1.00th=[ 78], 5.00th=[ 88], 10.00th=[ 90], 20.00th=[ 117], 00:15:57.271 | 30.00th=[ 122], 40.00th=[ 124], 50.00th=[ 126], 60.00th=[ 127], 00:15:57.271 | 70.00th=[ 146], 80.00th=[ 155], 90.00th=[ 157], 95.00th=[ 157], 00:15:57.271 | 99.00th=[ 176], 99.50th=[ 234], 99.90th=[ 279], 99.95th=[ 279], 00:15:57.271 | 99.99th=[ 292] 00:15:57.271 bw ( KiB/s): min=102706, max=178176, per=9.01%, avg=125354.55, stdev=21477.25, samples=20 00:15:57.272 iops : min= 401, max= 696, avg=489.60, stdev=83.90, samples=20 00:15:57.272 lat (msec) : 20=0.08%, 50=0.48%, 100=13.20%, 250=85.87%, 500=0.36% 00:15:57.272 cpu : usr=0.77%, sys=1.33%, ctx=7180, majf=0, minf=1 00:15:57.272 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:15:57.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:57.272 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:57.272 issued rwts: total=0,4961,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:57.272 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:57.272 job2: (groupid=0, jobs=1): err= 0: pid=86913: Sat Jul 13 06:02:48 2024 00:15:57.272 write: IOPS=615, BW=154MiB/s (161MB/s)(1561MiB/10138msec); 0 zone resets 00:15:57.272 slat (usec): min=17, max=43101, avg=1590.10, stdev=2975.45 00:15:57.272 clat (msec): min=12, max=285, avg=102.27, stdev=37.52 00:15:57.272 lat (msec): min=14, max=285, avg=103.86, stdev=37.97 00:15:57.272 clat percentiles (msec): 00:15:57.272 | 1.00th=[ 52], 5.00th=[ 54], 10.00th=[ 55], 20.00th=[ 73], 00:15:57.272 | 30.00th=[ 88], 40.00th=[ 90], 50.00th=[ 92], 60.00th=[ 93], 00:15:57.272 | 70.00th=[ 97], 80.00th=[ 155], 90.00th=[ 157], 95.00th=[ 157], 00:15:57.272 | 99.00th=[ 171], 99.50th=[ 211], 99.90th=[ 268], 99.95th=[ 275], 00:15:57.272 | 99.99th=[ 288] 00:15:57.272 bw ( KiB/s): min=102912, max=299008, per=11.38%, avg=158275.60, stdev=57363.74, samples=20 00:15:57.272 iops : min= 402, max= 1168, avg=618.25, stdev=224.08, samples=20 00:15:57.272 lat (msec) : 20=0.06%, 50=0.13%, 100=69.85%, 250=29.74%, 500=0.22% 00:15:57.272 cpu : usr=1.01%, sys=1.59%, ctx=7953, majf=0, minf=1 00:15:57.272 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:15:57.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:57.272 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:57.272 issued rwts: total=0,6245,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:57.272 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:57.272 job3: (groupid=0, jobs=1): err= 0: pid=86914: Sat Jul 13 06:02:48 2024 00:15:57.272 write: IOPS=486, BW=122MiB/s (128MB/s)(1233MiB/10138msec); 0 zone resets 00:15:57.272 slat (usec): min=18, max=12043, avg=2023.24, stdev=3524.12 00:15:57.272 clat (msec): min=6, max=285, avg=129.48, stdev=25.48 00:15:57.272 lat (msec): min=6, max=285, avg=131.51, stdev=25.62 00:15:57.272 clat percentiles (msec): 00:15:57.272 | 1.00th=[ 73], 5.00th=[ 87], 10.00th=[ 91], 20.00th=[ 117], 00:15:57.272 | 30.00th=[ 122], 40.00th=[ 125], 50.00th=[ 126], 60.00th=[ 128], 00:15:57.272 | 70.00th=[ 146], 80.00th=[ 155], 90.00th=[ 157], 95.00th=[ 157], 00:15:57.272 | 99.00th=[ 174], 99.50th=[ 230], 99.90th=[ 275], 99.95th=[ 275], 00:15:57.272 | 99.99th=[ 288] 00:15:57.272 bw ( KiB/s): min=102912, max=180072, per=8.96%, avg=124664.40, stdev=22114.06, 
samples=20 00:15:57.272 iops : min= 402, max= 703, avg=486.95, stdev=86.33, samples=20 00:15:57.272 lat (msec) : 10=0.02%, 20=0.16%, 50=0.49%, 100=12.61%, 250=86.44% 00:15:57.272 lat (msec) : 500=0.28% 00:15:57.272 cpu : usr=0.76%, sys=1.37%, ctx=5648, majf=0, minf=1 00:15:57.272 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:15:57.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:57.272 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:57.272 issued rwts: total=0,4932,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:57.272 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:57.272 job4: (groupid=0, jobs=1): err= 0: pid=86915: Sat Jul 13 06:02:48 2024 00:15:57.272 write: IOPS=498, BW=125MiB/s (131MB/s)(1261MiB/10109msec); 0 zone resets 00:15:57.272 slat (usec): min=20, max=99284, avg=1952.68, stdev=3634.20 00:15:57.272 clat (msec): min=73, max=228, avg=126.30, stdev=13.52 00:15:57.272 lat (msec): min=73, max=228, avg=128.25, stdev=13.30 00:15:57.272 clat percentiles (msec): 00:15:57.272 | 1.00th=[ 99], 5.00th=[ 115], 10.00th=[ 116], 20.00th=[ 120], 00:15:57.272 | 30.00th=[ 123], 40.00th=[ 124], 50.00th=[ 124], 60.00th=[ 125], 00:15:57.272 | 70.00th=[ 126], 80.00th=[ 128], 90.00th=[ 148], 95.00th=[ 159], 00:15:57.272 | 99.00th=[ 165], 99.50th=[ 184], 99.90th=[ 222], 99.95th=[ 222], 00:15:57.272 | 99.99th=[ 230] 00:15:57.272 bw ( KiB/s): min=104448, max=140288, per=9.16%, avg=127461.60, stdev=10229.14, samples=20 00:15:57.272 iops : min= 408, max= 548, avg=497.80, stdev=39.92, samples=20 00:15:57.272 lat (msec) : 100=1.09%, 250=98.91% 00:15:57.272 cpu : usr=1.06%, sys=1.27%, ctx=5433, majf=0, minf=1 00:15:57.272 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:15:57.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:57.272 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:57.272 issued rwts: total=0,5043,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:57.272 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:57.272 job5: (groupid=0, jobs=1): err= 0: pid=86916: Sat Jul 13 06:02:48 2024 00:15:57.272 write: IOPS=542, BW=136MiB/s (142MB/s)(1376MiB/10141msec); 0 zone resets 00:15:57.272 slat (usec): min=16, max=13366, avg=1774.40, stdev=3250.00 00:15:57.272 clat (msec): min=6, max=292, avg=116.10, stdev=35.62 00:15:57.272 lat (msec): min=6, max=292, avg=117.88, stdev=36.02 00:15:57.272 clat percentiles (msec): 00:15:57.272 | 1.00th=[ 39], 5.00th=[ 85], 10.00th=[ 87], 20.00th=[ 89], 00:15:57.272 | 30.00th=[ 91], 40.00th=[ 92], 50.00th=[ 93], 60.00th=[ 128], 00:15:57.272 | 70.00th=[ 153], 80.00th=[ 159], 90.00th=[ 159], 95.00th=[ 161], 00:15:57.272 | 99.00th=[ 165], 99.50th=[ 224], 99.90th=[ 284], 99.95th=[ 284], 00:15:57.272 | 99.99th=[ 292] 00:15:57.272 bw ( KiB/s): min=100864, max=182272, per=10.01%, avg=139264.00, stdev=37673.27, samples=20 00:15:57.272 iops : min= 394, max= 712, avg=544.00, stdev=147.16, samples=20 00:15:57.272 lat (msec) : 10=0.04%, 20=0.35%, 50=1.02%, 100=57.75%, 250=40.52% 00:15:57.272 lat (msec) : 500=0.33% 00:15:57.272 cpu : usr=0.91%, sys=1.41%, ctx=6887, majf=0, minf=1 00:15:57.272 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:15:57.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:57.272 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:57.272 issued rwts: total=0,5503,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:15:57.272 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:57.272 job6: (groupid=0, jobs=1): err= 0: pid=86917: Sat Jul 13 06:02:48 2024 00:15:57.272 write: IOPS=448, BW=112MiB/s (118MB/s)(1138MiB/10142msec); 0 zone resets 00:15:57.272 slat (usec): min=19, max=33873, avg=2190.67, stdev=3796.61 00:15:57.272 clat (msec): min=16, max=291, avg=140.26, stdev=20.62 00:15:57.272 lat (msec): min=16, max=291, avg=142.45, stdev=20.56 00:15:57.272 clat percentiles (msec): 00:15:57.272 | 1.00th=[ 111], 5.00th=[ 120], 10.00th=[ 122], 20.00th=[ 126], 00:15:57.272 | 30.00th=[ 128], 40.00th=[ 129], 50.00th=[ 131], 60.00th=[ 150], 00:15:57.272 | 70.00th=[ 157], 80.00th=[ 159], 90.00th=[ 161], 95.00th=[ 161], 00:15:57.272 | 99.00th=[ 184], 99.50th=[ 234], 99.90th=[ 284], 99.95th=[ 284], 00:15:57.272 | 99.99th=[ 292] 00:15:57.272 bw ( KiB/s): min=100352, max=131072, per=8.26%, avg=114944.00, stdev=13052.96, samples=20 00:15:57.272 iops : min= 392, max= 512, avg=449.00, stdev=50.99, samples=20 00:15:57.272 lat (msec) : 20=0.09%, 50=0.35%, 100=0.44%, 250=98.73%, 500=0.40% 00:15:57.272 cpu : usr=1.02%, sys=1.32%, ctx=6730, majf=0, minf=1 00:15:57.272 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:15:57.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:57.272 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:57.272 issued rwts: total=0,4553,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:57.272 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:57.272 job7: (groupid=0, jobs=1): err= 0: pid=86918: Sat Jul 13 06:02:48 2024 00:15:57.272 write: IOPS=462, BW=116MiB/s (121MB/s)(1174MiB/10142msec); 0 zone resets 00:15:57.272 slat (usec): min=18, max=31757, avg=2096.01, stdev=3713.16 00:15:57.272 clat (msec): min=18, max=288, avg=136.10, stdev=24.15 00:15:57.272 lat (msec): min=20, max=288, avg=138.20, stdev=24.28 00:15:57.272 clat percentiles (msec): 00:15:57.272 | 1.00th=[ 44], 5.00th=[ 114], 10.00th=[ 117], 20.00th=[ 122], 00:15:57.272 | 30.00th=[ 125], 40.00th=[ 126], 50.00th=[ 127], 60.00th=[ 146], 00:15:57.272 | 70.00th=[ 155], 80.00th=[ 157], 90.00th=[ 157], 95.00th=[ 165], 00:15:57.272 | 99.00th=[ 178], 99.50th=[ 232], 99.90th=[ 279], 99.95th=[ 279], 00:15:57.272 | 99.99th=[ 288] 00:15:57.272 bw ( KiB/s): min=98304, max=136192, per=8.52%, avg=118568.55, stdev=14240.69, samples=20 00:15:57.272 iops : min= 384, max= 532, avg=463.15, stdev=55.64, samples=20 00:15:57.272 lat (msec) : 20=0.02%, 50=1.21%, 100=1.73%, 250=96.66%, 500=0.38% 00:15:57.272 cpu : usr=0.84%, sys=1.31%, ctx=6313, majf=0, minf=1 00:15:57.272 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:15:57.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:57.272 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:57.272 issued rwts: total=0,4695,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:57.272 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:57.272 job8: (groupid=0, jobs=1): err= 0: pid=86919: Sat Jul 13 06:02:48 2024 00:15:57.272 write: IOPS=501, BW=125MiB/s (132MB/s)(1268MiB/10112msec); 0 zone resets 00:15:57.272 slat (usec): min=18, max=72552, avg=1966.07, stdev=3513.57 00:15:57.272 clat (msec): min=13, max=229, avg=125.52, stdev=16.08 00:15:57.272 lat (msec): min=14, max=230, avg=127.49, stdev=15.95 00:15:57.272 clat percentiles (msec): 00:15:57.272 | 1.00th=[ 54], 5.00th=[ 115], 10.00th=[ 116], 20.00th=[ 120], 
00:15:57.272 | 30.00th=[ 123], 40.00th=[ 124], 50.00th=[ 124], 60.00th=[ 125], 00:15:57.272 | 70.00th=[ 126], 80.00th=[ 127], 90.00th=[ 148], 95.00th=[ 159], 00:15:57.272 | 99.00th=[ 163], 99.50th=[ 184], 99.90th=[ 224], 99.95th=[ 224], 00:15:57.272 | 99.99th=[ 230] 00:15:57.272 bw ( KiB/s): min=102400, max=133632, per=9.22%, avg=128256.00, stdev=9065.82, samples=20 00:15:57.272 iops : min= 400, max= 522, avg=501.00, stdev=35.41, samples=20 00:15:57.272 lat (msec) : 20=0.16%, 50=0.71%, 100=0.37%, 250=98.76% 00:15:57.272 cpu : usr=0.82%, sys=1.35%, ctx=5498, majf=0, minf=1 00:15:57.272 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:15:57.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:57.272 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:57.272 issued rwts: total=0,5073,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:57.272 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:57.272 job9: (groupid=0, jobs=1): err= 0: pid=86920: Sat Jul 13 06:02:48 2024 00:15:57.272 write: IOPS=497, BW=124MiB/s (130MB/s)(1258MiB/10113msec); 0 zone resets 00:15:57.272 slat (usec): min=19, max=61908, avg=1981.93, stdev=3542.07 00:15:57.272 clat (msec): min=64, max=230, avg=126.52, stdev=13.44 00:15:57.272 lat (msec): min=64, max=230, avg=128.51, stdev=13.20 00:15:57.272 clat percentiles (msec): 00:15:57.272 | 1.00th=[ 110], 5.00th=[ 115], 10.00th=[ 116], 20.00th=[ 120], 00:15:57.272 | 30.00th=[ 123], 40.00th=[ 124], 50.00th=[ 125], 60.00th=[ 125], 00:15:57.272 | 70.00th=[ 126], 80.00th=[ 128], 90.00th=[ 150], 95.00th=[ 159], 00:15:57.272 | 99.00th=[ 163], 99.50th=[ 186], 99.90th=[ 224], 99.95th=[ 224], 00:15:57.272 | 99.99th=[ 230] 00:15:57.273 bw ( KiB/s): min=102400, max=133120, per=9.15%, avg=127243.45, stdev=9588.20, samples=20 00:15:57.273 iops : min= 400, max= 520, avg=497.00, stdev=37.51, samples=20 00:15:57.273 lat (msec) : 100=0.79%, 250=99.21% 00:15:57.273 cpu : usr=0.79%, sys=1.30%, ctx=6117, majf=0, minf=1 00:15:57.273 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:15:57.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:57.273 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:57.273 issued rwts: total=0,5033,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:57.273 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:57.273 job10: (groupid=0, jobs=1): err= 0: pid=86921: Sat Jul 13 06:02:48 2024 00:15:57.273 write: IOPS=446, BW=112MiB/s (117MB/s)(1131MiB/10137msec); 0 zone resets 00:15:57.273 slat (usec): min=20, max=52741, avg=2205.00, stdev=3885.87 00:15:57.273 clat (msec): min=55, max=287, avg=141.11, stdev=19.73 00:15:57.273 lat (msec): min=55, max=287, avg=143.31, stdev=19.63 00:15:57.273 clat percentiles (msec): 00:15:57.273 | 1.00th=[ 116], 5.00th=[ 120], 10.00th=[ 122], 20.00th=[ 126], 00:15:57.273 | 30.00th=[ 128], 40.00th=[ 129], 50.00th=[ 131], 60.00th=[ 150], 00:15:57.273 | 70.00th=[ 157], 80.00th=[ 159], 90.00th=[ 161], 95.00th=[ 163], 00:15:57.273 | 99.00th=[ 188], 99.50th=[ 228], 99.90th=[ 279], 99.95th=[ 279], 00:15:57.273 | 99.99th=[ 288] 00:15:57.273 bw ( KiB/s): min=86016, max=129536, per=8.21%, avg=114203.85, stdev=14198.99, samples=20 00:15:57.273 iops : min= 336, max= 506, avg=446.05, stdev=55.42, samples=20 00:15:57.273 lat (msec) : 100=0.44%, 250=99.25%, 500=0.31% 00:15:57.273 cpu : usr=0.90%, sys=1.37%, ctx=4230, majf=0, minf=1 00:15:57.273 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 
16=0.4%, 32=0.7%, >=64=98.6% 00:15:57.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:57.273 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:57.273 issued rwts: total=0,4525,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:57.273 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:57.273 00:15:57.273 Run status group 0 (all jobs): 00:15:57.273 WRITE: bw=1358MiB/s (1424MB/s), 112MiB/s-154MiB/s (117MB/s-161MB/s), io=13.5GiB (14.5GB), run=10109-10146msec 00:15:57.273 00:15:57.273 Disk stats (read/write): 00:15:57.273 nvme0n1: ios=49/8999, merge=0/0, ticks=72/1213661, in_queue=1213733, util=98.03% 00:15:57.273 nvme10n1: ios=49/9793, merge=0/0, ticks=50/1214193, in_queue=1214243, util=98.03% 00:15:57.273 nvme1n1: ios=39/12355, merge=0/0, ticks=32/1212138, in_queue=1212170, util=98.00% 00:15:57.273 nvme2n1: ios=13/9729, merge=0/0, ticks=13/1212264, in_queue=1212277, util=97.93% 00:15:57.273 nvme3n1: ios=15/9948, merge=0/0, ticks=10/1214885, in_queue=1214895, util=97.98% 00:15:57.273 nvme4n1: ios=0/10874, merge=0/0, ticks=0/1212487, in_queue=1212487, util=98.22% 00:15:57.273 nvme5n1: ios=0/8972, merge=0/0, ticks=0/1212453, in_queue=1212453, util=98.30% 00:15:57.273 nvme6n1: ios=0/9257, merge=0/0, ticks=0/1214022, in_queue=1214022, util=98.44% 00:15:57.273 nvme7n1: ios=0/10009, merge=0/0, ticks=0/1213716, in_queue=1213716, util=98.64% 00:15:57.273 nvme8n1: ios=0/9928, merge=0/0, ticks=0/1213596, in_queue=1213596, util=98.74% 00:15:57.273 nvme9n1: ios=0/8911, merge=0/0, ticks=0/1211552, in_queue=1211552, util=98.80% 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:57.273 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode2 00:15:57.273 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:15:57.273 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:15:57.273 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:57.273 06:02:48 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:15:57.273 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:15:57.273 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:15:57.273 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:15:57.273 06:02:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:15:57.274 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:15:57.274 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:15:57.274 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:15:57.274 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1219 -- # local i=0 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:57.274 06:02:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:57.274 rmmod nvme_tcp 00:15:57.274 rmmod nvme_fabrics 00:15:57.533 rmmod nvme_keyring 00:15:57.533 06:02:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:57.533 06:02:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:15:57.533 06:02:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:15:57.533 06:02:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 86229 ']' 00:15:57.533 06:02:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 86229 00:15:57.533 06:02:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@948 -- # '[' -z 86229 ']' 00:15:57.533 06:02:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # kill -0 86229 00:15:57.533 06:02:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # uname 00:15:57.533 06:02:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:57.533 06:02:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86229 00:15:57.533 killing process with pid 86229 00:15:57.533 06:02:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:57.533 06:02:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:57.533 06:02:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86229' 00:15:57.533 06:02:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@967 -- # kill 86229 
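For readability, the per-subsystem teardown that multiconnection.sh traces above (its lines 37-40) repeats the same steps for cnode1 through cnode11. Stripped of the xtrace prefixes it amounts to roughly the loop below; this is a sketch reconstructed from the traced commands, not the verbatim script source, and the count of eleven subsystems is inferred from the trace.

    for i in $(seq 1 "$NVMF_SUBSYS"); do                                # 11 subsystems in this run
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"              # drop the host-side connection
        waitforserial_disconnect "SPDK${i}"                             # poll lsblk until serial SPDK${i} disappears
        rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"   # remove the subsystem on the target side
    done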
00:15:57.533 06:02:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@972 -- # wait 86229 00:15:57.792 06:02:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:57.792 06:02:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:57.792 06:02:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:57.792 06:02:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:57.792 06:02:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:57.792 06:02:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:57.792 06:02:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:57.792 06:02:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:57.792 06:02:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:57.792 00:15:57.792 real 0m48.634s 00:15:57.792 user 2m38.565s 00:15:57.792 sys 0m35.655s 00:15:57.792 ************************************ 00:15:57.792 06:02:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:57.792 06:02:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:57.792 END TEST nvmf_multiconnection 00:15:57.792 ************************************ 00:15:57.792 06:02:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:57.792 06:02:49 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:15:57.792 06:02:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:57.792 06:02:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:57.792 06:02:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:57.792 ************************************ 00:15:57.792 START TEST nvmf_initiator_timeout 00:15:57.792 ************************************ 00:15:57.792 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:15:57.792 * Looking for test storage... 
00:15:57.792 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:57.792 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:57.792 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:15:57.792 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:57.792 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:57.792 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:57.792 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:57.792 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:57.792 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:57.792 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:57.792 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:57.792 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:57.792 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:57.792 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:15:57.792 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=d95af516-4532-4483-a837-b3cd72acabce 00:15:57.792 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:57.792 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:57.792 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:57.792 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:57.792 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:57.792 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:57.792 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:57.792 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:57.792 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.792 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.793 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.793 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:15:57.793 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.793 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:15:57.793 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:58.051 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:58.051 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:58.051 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:58.051 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:58.051 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:58.051 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:58.051 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:58.051 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:58.051 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:58.051 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:15:58.051 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:58.051 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:58.051 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:58.051 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:58.051 06:02:49 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:58.051 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:58.051 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:58.051 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:58.051 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:58.051 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:58.051 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:58.051 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:58.051 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:58.051 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:58.051 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:58.051 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:58.051 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:58.051 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:58.051 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:58.051 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:58.051 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:58.051 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:58.051 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:58.051 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:58.051 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:58.051 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:58.052 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:58.052 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:58.052 Cannot find device "nvmf_tgt_br" 00:15:58.052 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # true 00:15:58.052 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:58.052 Cannot find device "nvmf_tgt_br2" 00:15:58.052 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # true 00:15:58.052 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:58.052 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:58.052 Cannot find device "nvmf_tgt_br" 00:15:58.052 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # true 00:15:58.052 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:58.052 Cannot find device "nvmf_tgt_br2" 00:15:58.052 06:02:49 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # true 00:15:58.052 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:58.052 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:58.052 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:58.052 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:58.052 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # true 00:15:58.052 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:58.052 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:58.052 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # true 00:15:58.052 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:58.052 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:58.052 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:58.052 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:58.052 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:58.052 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:58.052 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:58.052 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:58.052 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:58.052 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:58.052 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:58.052 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:58.052 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:58.052 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:58.052 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:58.052 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:58.311 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:58.311 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:58.311 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:58.311 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:58.311 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 
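Collecting the nvmf_veth_init commands traced above into one place (xtrace prefixes removed), the topology the TCP tests run on is roughly the following; this is a consolidation of the traced commands, not the verbatim nvmf/common.sh source. The iptables rules and the connectivity pings that validate the setup follow immediately below.

    ip netns add nvmf_tgt_ns_spdk                                   # target runs in its own namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator-side veth pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br        # first target-side veth pair
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2       # second target-side veth pair
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                        # NVMF_INITIATOR_IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # NVMF_FIRST_TARGET_IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # NVMF_SECOND_TARGET_IP
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge                                 # bridge tying the three *_br peers together
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br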
00:15:58.311 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:58.311 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:58.311 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:58.311 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:58.311 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:15:58.311 00:15:58.311 --- 10.0.0.2 ping statistics --- 00:15:58.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:58.311 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:15:58.311 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:58.311 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:58.311 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:15:58.311 00:15:58.311 --- 10.0.0.3 ping statistics --- 00:15:58.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:58.311 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:15:58.311 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:58.311 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:58.311 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:15:58.311 00:15:58.311 --- 10.0.0.1 ping statistics --- 00:15:58.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:58.311 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:15:58.311 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:58.311 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@433 -- # return 0 00:15:58.311 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:58.311 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:58.311 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:58.311 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:58.311 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:58.311 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:58.311 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:58.311 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:15:58.311 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:58.311 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:58.311 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:15:58.311 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:58.311 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=87282 00:15:58.311 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 87282 00:15:58.311 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@829 -- # '[' -z 87282 ']' 00:15:58.311 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:15:58.311 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:58.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:58.311 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:58.311 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:58.311 06:02:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:15:58.311 [2024-07-13 06:02:49.920989] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:15:58.311 [2024-07-13 06:02:49.921077] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:58.570 [2024-07-13 06:02:50.059220] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:58.570 [2024-07-13 06:02:50.097412] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:58.570 [2024-07-13 06:02:50.097488] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:58.570 [2024-07-13 06:02:50.097499] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:58.570 [2024-07-13 06:02:50.097507] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:58.570 [2024-07-13 06:02:50.097513] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
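The nvmfappstart/waitforlisten pair above amounts to launching nvmf_tgt inside the namespace and then polling its JSON-RPC socket before any configuration is sent. A minimal equivalent is sketched below; the nvmf_tgt path and flags are taken from the trace, while the use of scripts/rpc.py with rpc_get_methods against the default /var/tmp/spdk.sock socket is an assumption (the harness's waitforlisten is more elaborate):

    # launch the NVMe-oF target inside the namespace with the mask/trace flags from the log
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # poll the RPC socket until the app answers, then configuration can begin
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods \
            >/dev/null 2>&1; do
        sleep 0.5
    done

Once the socket answers, the rpc_cmd calls that follow in the trace create the bdevs, transport, subsystem and listener.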
00:15:58.570 [2024-07-13 06:02:50.097654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:58.570 [2024-07-13 06:02:50.097696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:58.570 [2024-07-13 06:02:50.097831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.570 [2024-07-13 06:02:50.098316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:58.570 [2024-07-13 06:02:50.129030] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:59.504 06:02:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:59.504 06:02:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@862 -- # return 0 00:15:59.504 06:02:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:59.504 06:02:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:59.504 06:02:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:15:59.504 06:02:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:59.504 06:02:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:59.504 06:02:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:59.504 06:02:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.504 06:02:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:15:59.504 Malloc0 00:15:59.504 06:02:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.504 06:02:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:15:59.504 06:02:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.504 06:02:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:15:59.504 Delay0 00:15:59.504 06:02:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.504 06:02:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:59.504 06:02:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.504 06:02:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:15:59.504 [2024-07-13 06:02:50.971765] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:59.504 06:02:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.504 06:02:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:59.504 06:02:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.504 06:02:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:15:59.504 06:02:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.504 06:02:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:59.504 06:02:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.504 06:02:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:15:59.504 06:02:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.504 06:02:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:59.504 06:02:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.504 06:02:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:15:59.504 [2024-07-13 06:02:50.999909] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:59.504 06:02:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.504 06:02:51 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid=d95af516-4532-4483-a837-b3cd72acabce -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:59.504 06:02:51 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:15:59.504 06:02:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:15:59.504 06:02:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:59.504 06:02:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:59.504 06:02:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:16:02.035 06:02:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:02.035 06:02:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:02.035 06:02:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:02.035 06:02:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:02.035 06:02:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:02.035 06:02:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:16:02.035 06:02:53 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=87346 00:16:02.035 06:02:53 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:16:02.035 06:02:53 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:16:02.035 [global] 00:16:02.035 thread=1 00:16:02.035 invalidate=1 00:16:02.035 rw=write 00:16:02.035 time_based=1 00:16:02.035 runtime=60 00:16:02.035 ioengine=libaio 00:16:02.035 direct=1 00:16:02.035 bs=4096 00:16:02.035 iodepth=1 00:16:02.035 norandommap=0 00:16:02.035 numjobs=1 00:16:02.035 00:16:02.035 verify_dump=1 00:16:02.035 verify_backlog=512 00:16:02.035 verify_state_save=0 00:16:02.035 do_verify=1 00:16:02.035 verify=crc32c-intel 00:16:02.035 [job0] 00:16:02.035 filename=/dev/nvme0n1 00:16:02.035 Could not set queue depth (nvme0n1) 00:16:02.035 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:02.035 fio-3.35 00:16:02.035 Starting 1 thread 00:16:04.563 06:02:56 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:16:04.563 06:02:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.563 06:02:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:04.563 true 00:16:04.563 06:02:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.563 06:02:56 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:16:04.563 06:02:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.563 06:02:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:04.563 true 00:16:04.563 06:02:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.563 06:02:56 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:16:04.563 06:02:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.563 06:02:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:04.563 true 00:16:04.563 06:02:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.563 06:02:56 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:16:04.563 06:02:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.563 06:02:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:04.563 true 00:16:04.563 06:02:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.563 06:02:56 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:16:07.847 06:02:59 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:16:07.847 06:02:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.847 06:02:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:07.847 true 00:16:07.847 06:02:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.847 06:02:59 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:16:07.847 06:02:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.847 06:02:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:07.847 true 00:16:07.847 06:02:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.847 06:02:59 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:16:07.847 06:02:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.847 06:02:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:07.847 true 00:16:07.847 06:02:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:16:07.847 06:02:59 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:16:07.847 06:02:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.847 06:02:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:07.847 true 00:16:07.847 06:02:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.847 06:02:59 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:16:07.847 06:02:59 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 87346 00:17:04.071 00:17:04.071 job0: (groupid=0, jobs=1): err= 0: pid=87367: Sat Jul 13 06:03:53 2024 00:17:04.071 read: IOPS=776, BW=3106KiB/s (3181kB/s)(182MiB/60000msec) 00:17:04.071 slat (usec): min=10, max=12557, avg=15.54, stdev=66.05 00:17:04.071 clat (usec): min=156, max=40492k, avg=1083.16, stdev=187590.33 00:17:04.071 lat (usec): min=168, max=40492k, avg=1098.70, stdev=187590.32 00:17:04.071 clat percentiles (usec): 00:17:04.071 | 1.00th=[ 180], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 198], 00:17:04.071 | 30.00th=[ 202], 40.00th=[ 206], 50.00th=[ 212], 60.00th=[ 217], 00:17:04.071 | 70.00th=[ 223], 80.00th=[ 229], 90.00th=[ 241], 95.00th=[ 251], 00:17:04.071 | 99.00th=[ 277], 99.50th=[ 293], 99.90th=[ 326], 99.95th=[ 375], 00:17:04.071 | 99.99th=[ 734] 00:17:04.071 write: IOPS=784, BW=3138KiB/s (3213kB/s)(184MiB/60000msec); 0 zone resets 00:17:04.071 slat (usec): min=12, max=538, avg=22.17, stdev= 6.49 00:17:04.071 clat (usec): min=16, max=2661, avg=161.26, stdev=24.34 00:17:04.071 lat (usec): min=136, max=2691, avg=183.43, stdev=25.56 00:17:04.071 clat percentiles (usec): 00:17:04.071 | 1.00th=[ 130], 5.00th=[ 135], 10.00th=[ 139], 20.00th=[ 145], 00:17:04.071 | 30.00th=[ 151], 40.00th=[ 155], 50.00th=[ 159], 60.00th=[ 165], 00:17:04.071 | 70.00th=[ 169], 80.00th=[ 176], 90.00th=[ 186], 95.00th=[ 196], 00:17:04.071 | 99.00th=[ 219], 99.50th=[ 229], 99.90th=[ 265], 99.95th=[ 285], 00:17:04.071 | 99.99th=[ 627] 00:17:04.071 bw ( KiB/s): min= 4096, max=11720, per=100.00%, avg=9411.28, stdev=1698.62, samples=39 00:17:04.071 iops : min= 1024, max= 2930, avg=2352.82, stdev=424.65, samples=39 00:17:04.071 lat (usec) : 20=0.01%, 250=97.25%, 500=2.72%, 750=0.01%, 1000=0.01% 00:17:04.071 lat (msec) : 2=0.01%, 4=0.01%, >=2000=0.01% 00:17:04.071 cpu : usr=0.65%, sys=2.27%, ctx=93675, majf=0, minf=2 00:17:04.071 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:04.071 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:04.071 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:04.071 issued rwts: total=46592,47069,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:04.071 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:04.071 00:17:04.071 Run status group 0 (all jobs): 00:17:04.071 READ: bw=3106KiB/s (3181kB/s), 3106KiB/s-3106KiB/s (3181kB/s-3181kB/s), io=182MiB (191MB), run=60000-60000msec 00:17:04.071 WRITE: bw=3138KiB/s (3213kB/s), 3138KiB/s-3138KiB/s (3213kB/s-3213kB/s), io=184MiB (193MB), run=60000-60000msec 00:17:04.071 00:17:04.071 Disk stats (read/write): 00:17:04.071 nvme0n1: ios=46786/46592, merge=0/0, ticks=10362/8116, in_queue=18478, util=99.73% 00:17:04.071 06:03:53 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:04.071 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:04.071 06:03:53 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:04.071 06:03:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:17:04.071 06:03:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:04.071 06:03:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:04.071 06:03:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:04.071 06:03:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:04.071 06:03:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:17:04.071 06:03:53 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:17:04.071 06:03:53 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:17:04.071 nvmf hotplug test: fio successful as expected 00:17:04.071 06:03:53 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:04.071 06:03:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.071 06:03:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:04.071 06:03:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.071 06:03:53 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:17:04.071 06:03:53 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:17:04.071 06:03:53 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:17:04.071 06:03:53 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:04.071 06:03:53 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:17:04.071 06:03:53 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:04.071 06:03:53 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:17:04.071 06:03:53 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:04.071 06:03:53 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:04.071 rmmod nvme_tcp 00:17:04.071 rmmod nvme_fabrics 00:17:04.071 rmmod nvme_keyring 00:17:04.071 06:03:53 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:04.071 06:03:53 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:17:04.071 06:03:53 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:17:04.071 06:03:53 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 87282 ']' 00:17:04.071 06:03:53 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 87282 00:17:04.071 06:03:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@948 -- # '[' -z 87282 ']' 00:17:04.071 06:03:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # kill -0 87282 00:17:04.071 06:03:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # uname 00:17:04.071 06:03:53 nvmf_tcp.nvmf_initiator_timeout -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:04.071 06:03:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87282 00:17:04.071 killing process with pid 87282 00:17:04.071 06:03:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:04.071 06:03:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:04.071 06:03:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87282' 00:17:04.071 06:03:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@967 -- # kill 87282 00:17:04.071 06:03:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # wait 87282 00:17:04.071 06:03:53 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:04.071 06:03:53 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:04.071 06:03:53 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:04.071 06:03:53 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:04.071 06:03:53 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:04.071 06:03:53 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:04.071 06:03:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:04.071 06:03:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:04.071 06:03:53 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:04.071 00:17:04.071 real 1m4.391s 00:17:04.071 user 3m52.125s 00:17:04.071 sys 0m22.570s 00:17:04.071 06:03:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:04.071 06:03:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:04.071 ************************************ 00:17:04.071 END TEST nvmf_initiator_timeout 00:17:04.071 ************************************ 00:17:04.071 06:03:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:04.071 06:03:53 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ virt == phy ]] 00:17:04.071 06:03:53 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:17:04.071 06:03:53 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:04.071 06:03:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:04.071 06:03:53 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:17:04.071 06:03:53 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:04.071 06:03:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:04.071 06:03:53 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 1 -eq 0 ]] 00:17:04.071 06:03:53 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:04.071 06:03:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:04.071 06:03:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:04.071 06:03:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:04.071 ************************************ 00:17:04.071 START TEST nvmf_identify 00:17:04.071 ************************************ 00:17:04.071 06:03:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:04.071 * Looking for test storage... 00:17:04.071 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:04.071 06:03:53 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:04.071 06:03:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:17:04.071 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:04.071 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:04.071 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:04.071 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:04.071 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:04.071 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:04.071 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:04.071 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:04.071 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:04.071 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:04.071 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:17:04.071 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=d95af516-4532-4483-a837-b3cd72acabce 00:17:04.071 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:04.071 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:04.071 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:04.071 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:04.071 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:04.071 06:03:54 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:04.071 06:03:54 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:04.071 06:03:54 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:04.071 06:03:54 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.071 06:03:54 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.071 06:03:54 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.071 06:03:54 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:04.072 Cannot find device "nvmf_tgt_br" 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # true 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:04.072 Cannot find device "nvmf_tgt_br2" 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # true 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:04.072 Cannot find device "nvmf_tgt_br" 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # true 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:04.072 Cannot find device "nvmf_tgt_br2" 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # true 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:04.072 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:04.072 06:03:54 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # true 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:04.072 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # true 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:04.072 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:04.072 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:17:04.072 00:17:04.072 --- 10.0.0.2 ping statistics --- 00:17:04.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:04.072 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:04.072 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:04.072 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:17:04.072 00:17:04.072 --- 10.0.0.3 ping statistics --- 00:17:04.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:04.072 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:04.072 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:04.072 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:17:04.072 00:17:04.072 --- 10.0.0.1 ping statistics --- 00:17:04.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:04.072 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=88202 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 88202 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 88202 ']' 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:04.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
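Before the identify test output continues, it is worth recapping what the initiator_timeout run above actually exercised: fio writes through a delay bdev whose latencies are raised past the initiator's I/O timeout and then restored. The sketch below uses the same values as the rpc_cmd calls in that trace; the use of scripts/rpc.py directly is an assumption (rpc_cmd in the harness forwards the same arguments), and the reading that 31000000 us is chosen to exceed the host's default 30 s NVMe I/O timeout is an interpretation, not something the log states:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30   # 30 us baseline latencies

    # while fio runs against the exported Delay0 namespace, push latencies up...
    $RPC bdev_delay_update_latency Delay0 avg_read 31000000
    $RPC bdev_delay_update_latency Delay0 avg_write 31000000
    $RPC bdev_delay_update_latency Delay0 p99_read 31000000
    $RPC bdev_delay_update_latency Delay0 p99_write 310000000

    # ...then drop them back so the remaining I/O and verification can complete
    for lat in avg_read avg_write p99_read p99_write; do
        $RPC bdev_delay_update_latency Delay0 "$lat" 30
    done

The expected outcome is what the log records: fio finishes with status 0 and the test prints "nvmf hotplug test: fio successful as expected".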
00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:04.072 [2024-07-13 06:03:54.423295] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:17:04.072 [2024-07-13 06:03:54.423404] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:04.072 [2024-07-13 06:03:54.560030] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:04.072 [2024-07-13 06:03:54.603901] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:04.072 [2024-07-13 06:03:54.603987] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:04.072 [2024-07-13 06:03:54.604001] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:04.072 [2024-07-13 06:03:54.604013] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:04.072 [2024-07-13 06:03:54.604022] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:04.072 [2024-07-13 06:03:54.604193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:04.072 [2024-07-13 06:03:54.604345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:04.072 [2024-07-13 06:03:54.604945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:04.072 [2024-07-13 06:03:54.604979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.072 [2024-07-13 06:03:54.638149] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:04.072 [2024-07-13 06:03:54.688089] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:04.072 Malloc0 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:04.072 [2024-07-13 06:03:54.787335] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:04.072 [ 00:17:04.072 { 00:17:04.072 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:04.072 "subtype": "Discovery", 00:17:04.072 "listen_addresses": [ 00:17:04.072 { 00:17:04.072 "trtype": "TCP", 00:17:04.072 "adrfam": "IPv4", 00:17:04.072 "traddr": "10.0.0.2", 00:17:04.072 "trsvcid": "4420" 00:17:04.072 } 00:17:04.072 ], 00:17:04.072 "allow_any_host": true, 00:17:04.072 "hosts": [] 00:17:04.072 }, 00:17:04.072 { 00:17:04.072 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:04.072 "subtype": "NVMe", 00:17:04.072 "listen_addresses": [ 00:17:04.072 { 00:17:04.072 "trtype": "TCP", 00:17:04.072 "adrfam": "IPv4", 00:17:04.072 "traddr": "10.0.0.2", 00:17:04.072 "trsvcid": "4420" 00:17:04.072 } 00:17:04.072 ], 00:17:04.072 "allow_any_host": true, 00:17:04.072 "hosts": [], 00:17:04.072 "serial_number": "SPDK00000000000001", 00:17:04.072 "model_number": "SPDK bdev Controller", 00:17:04.072 "max_namespaces": 32, 00:17:04.072 "min_cntlid": 1, 00:17:04.072 "max_cntlid": 65519, 00:17:04.072 "namespaces": [ 00:17:04.072 { 00:17:04.072 "nsid": 1, 00:17:04.072 "bdev_name": "Malloc0", 00:17:04.072 "name": "Malloc0", 00:17:04.072 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:17:04.072 "eui64": "ABCDEF0123456789", 00:17:04.072 "uuid": "3c1a25a7-a2dc-4c60-a622-d0f019bd6ead" 00:17:04.072 } 00:17:04.072 ] 00:17:04.072 } 00:17:04.072 ] 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.072 06:03:54 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:17:04.072 [2024-07-13 06:03:54.838465] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:17:04.073 [2024-07-13 06:03:54.838518] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88231 ] 00:17:04.073 [2024-07-13 06:03:54.978924] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:17:04.073 [2024-07-13 06:03:54.978985] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:04.073 [2024-07-13 06:03:54.978993] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:04.073 [2024-07-13 06:03:54.979005] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:04.073 [2024-07-13 06:03:54.979012] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:04.073 [2024-07-13 06:03:54.979137] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:17:04.073 [2024-07-13 06:03:54.979190] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1a6ae60 0 00:17:04.073 [2024-07-13 06:03:54.983392] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:04.073 [2024-07-13 06:03:54.983415] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:04.073 [2024-07-13 06:03:54.983421] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:04.073 [2024-07-13 06:03:54.983425] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:04.073 [2024-07-13 06:03:54.983470] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.073 [2024-07-13 06:03:54.983479] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.073 [2024-07-13 06:03:54.983483] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a6ae60) 00:17:04.073 [2024-07-13 06:03:54.983498] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:04.073 [2024-07-13 06:03:54.983533] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa3700, cid 0, qid 0 00:17:04.073 [2024-07-13 06:03:54.991416] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.073 [2024-07-13 06:03:54.991439] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.073 [2024-07-13 06:03:54.991445] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.073 [2024-07-13 06:03:54.991452] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa3700) on tqpair=0x1a6ae60 00:17:04.073 [2024-07-13 06:03:54.991464] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:04.073 [2024-07-13 06:03:54.991473] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:17:04.073 [2024-07-13 06:03:54.991480] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:17:04.073 [2024-07-13 06:03:54.991500] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.073 [2024-07-13 06:03:54.991506] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.073 [2024-07-13 06:03:54.991510] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a6ae60) 00:17:04.073 [2024-07-13 06:03:54.991520] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.073 [2024-07-13 06:03:54.991551] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa3700, cid 0, qid 0 00:17:04.073 [2024-07-13 06:03:54.991616] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.073 [2024-07-13 06:03:54.991624] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.073 [2024-07-13 06:03:54.991628] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.073 [2024-07-13 06:03:54.991633] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa3700) on tqpair=0x1a6ae60 00:17:04.073 [2024-07-13 06:03:54.991639] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:17:04.073 [2024-07-13 06:03:54.991647] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:17:04.073 [2024-07-13 06:03:54.991656] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.073 [2024-07-13 06:03:54.991661] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.073 [2024-07-13 06:03:54.991665] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a6ae60) 00:17:04.073 [2024-07-13 06:03:54.991673] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.073 [2024-07-13 06:03:54.991695] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa3700, cid 0, qid 0 00:17:04.073 [2024-07-13 06:03:54.991746] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.073 [2024-07-13 06:03:54.991753] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.073 [2024-07-13 06:03:54.991757] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.073 [2024-07-13 06:03:54.991762] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa3700) on tqpair=0x1a6ae60 00:17:04.073 [2024-07-13 06:03:54.991768] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:17:04.073 [2024-07-13 06:03:54.991778] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:17:04.073 [2024-07-13 06:03:54.991786] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.073 [2024-07-13 06:03:54.991791] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.073 [2024-07-13 06:03:54.991795] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a6ae60) 00:17:04.073 [2024-07-13 06:03:54.991803] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
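For completeness, the rpc_cmd-driven setup that produced the nvmf_get_subsystems listing above corresponds roughly to the standalone rpc.py sequence below, followed by the identify invocation the test launches. All arguments are copied from the trace; driving them through scripts/rpc.py by hand, rather than the harness's rpc_cmd wrapper, is the only assumption:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # query the discovery service the same way host/identify.sh does
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
        -L all

The debug trace that surrounds this note is the output of that spdk_nvme_identify run connecting to the discovery controller at 10.0.0.2:4420.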
00:17:04.073 [2024-07-13 06:03:54.991824] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa3700, cid 0, qid 0 00:17:04.073 [2024-07-13 06:03:54.991868] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.073 [2024-07-13 06:03:54.991876] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.073 [2024-07-13 06:03:54.991880] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.073 [2024-07-13 06:03:54.991885] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa3700) on tqpair=0x1a6ae60 00:17:04.073 [2024-07-13 06:03:54.991895] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:04.073 [2024-07-13 06:03:54.991913] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.073 [2024-07-13 06:03:54.991919] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.073 [2024-07-13 06:03:54.991923] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a6ae60) 00:17:04.073 [2024-07-13 06:03:54.991931] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.073 [2024-07-13 06:03:54.991953] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa3700, cid 0, qid 0 00:17:04.073 [2024-07-13 06:03:54.992004] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.073 [2024-07-13 06:03:54.992011] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.073 [2024-07-13 06:03:54.992015] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.073 [2024-07-13 06:03:54.992019] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa3700) on tqpair=0x1a6ae60 00:17:04.073 [2024-07-13 06:03:54.992025] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:17:04.073 [2024-07-13 06:03:54.992031] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:17:04.073 [2024-07-13 06:03:54.992039] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:04.073 [2024-07-13 06:03:54.992145] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:17:04.073 [2024-07-13 06:03:54.992152] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:04.073 [2024-07-13 06:03:54.992162] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.073 [2024-07-13 06:03:54.992167] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.073 [2024-07-13 06:03:54.992171] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a6ae60) 00:17:04.073 [2024-07-13 06:03:54.992179] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.073 [2024-07-13 06:03:54.992200] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa3700, cid 0, qid 0 00:17:04.073 [2024-07-13 06:03:54.992248] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.073 [2024-07-13 06:03:54.992255] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.073 [2024-07-13 06:03:54.992260] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.073 [2024-07-13 06:03:54.992264] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa3700) on tqpair=0x1a6ae60 00:17:04.073 [2024-07-13 06:03:54.992270] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:04.073 [2024-07-13 06:03:54.992281] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.073 [2024-07-13 06:03:54.992286] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.073 [2024-07-13 06:03:54.992290] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a6ae60) 00:17:04.073 [2024-07-13 06:03:54.992298] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.073 [2024-07-13 06:03:54.992318] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa3700, cid 0, qid 0 00:17:04.073 [2024-07-13 06:03:54.992362] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.073 [2024-07-13 06:03:54.992384] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.073 [2024-07-13 06:03:54.992389] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.073 [2024-07-13 06:03:54.992394] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa3700) on tqpair=0x1a6ae60 00:17:04.073 [2024-07-13 06:03:54.992399] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:04.073 [2024-07-13 06:03:54.992405] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:17:04.073 [2024-07-13 06:03:54.992414] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:17:04.073 [2024-07-13 06:03:54.992425] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:17:04.073 [2024-07-13 06:03:54.992436] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.073 [2024-07-13 06:03:54.992441] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a6ae60) 00:17:04.073 [2024-07-13 06:03:54.992450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.073 [2024-07-13 06:03:54.992472] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa3700, cid 0, qid 0 00:17:04.073 [2024-07-13 06:03:54.992556] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:04.073 [2024-07-13 06:03:54.992563] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:04.073 [2024-07-13 06:03:54.992567] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:04.073 [2024-07-13 06:03:54.992572] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a6ae60): datao=0, datal=4096, cccid=0 
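Editor's note: the PROPERTY GET/SET exchanges above are the NVMe controller enable handshake carried over fabrics: read CC, wait for CSTS.RDY = 0, write CC.EN = 1, then poll until CSTS.RDY = 1. The 15000 ms state timeouts appear to correspond to CAP.TO = 30 (30 x 500 ms). Below is a hedged sketch of inspecting the same registers from application code, assuming the spdk_nvme_ctrlr_get_regs_*() accessors in current SPDK headers and a ctrlr handle obtained as in the previous sketch.

#include "spdk/stdinc.h"
#include "spdk/nvme.h"

/* Print the registers that drive the enable state machine seen above.
 * Over fabrics these map to property reads rather than PCIe MMIO. */
static void
print_enable_registers(struct spdk_nvme_ctrlr *ctrlr)
{
	union spdk_nvme_cap_register cap = spdk_nvme_ctrlr_get_regs_cap(ctrlr);
	union spdk_nvme_csts_register csts = spdk_nvme_ctrlr_get_regs_csts(ctrlr);

	/* CAP.TO is in 500 ms units, so TO = 30 gives a 15000 ms budget. */
	printf("CAP.TO    = %u (%u ms)\n", cap.bits.to, cap.bits.to * 500);
	printf("CSTS.RDY  = %u\n", csts.bits.rdy);
	printf("CSTS.SHST = %u\n", csts.bits.shst);
}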
00:17:04.073 [2024-07-13 06:03:54.992577] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1aa3700) on tqpair(0x1a6ae60): expected_datao=0, payload_size=4096 00:17:04.073 [2024-07-13 06:03:54.992583] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.073 [2024-07-13 06:03:54.992592] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:04.073 [2024-07-13 06:03:54.992598] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:04.073 [2024-07-13 06:03:54.992607] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.073 [2024-07-13 06:03:54.992613] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.073 [2024-07-13 06:03:54.992617] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.073 [2024-07-13 06:03:54.992622] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa3700) on tqpair=0x1a6ae60 00:17:04.073 [2024-07-13 06:03:54.992630] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:17:04.073 [2024-07-13 06:03:54.992636] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:17:04.073 [2024-07-13 06:03:54.992641] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:17:04.073 [2024-07-13 06:03:54.992647] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:17:04.073 [2024-07-13 06:03:54.992652] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:17:04.073 [2024-07-13 06:03:54.992658] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:17:04.073 [2024-07-13 06:03:54.992668] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:17:04.073 [2024-07-13 06:03:54.992676] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.073 [2024-07-13 06:03:54.992681] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.073 [2024-07-13 06:03:54.992685] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a6ae60) 00:17:04.073 [2024-07-13 06:03:54.992693] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:04.073 [2024-07-13 06:03:54.992714] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa3700, cid 0, qid 0 00:17:04.073 [2024-07-13 06:03:54.992769] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.073 [2024-07-13 06:03:54.992776] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.073 [2024-07-13 06:03:54.992780] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.073 [2024-07-13 06:03:54.992784] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa3700) on tqpair=0x1a6ae60 00:17:04.073 [2024-07-13 06:03:54.992793] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.073 [2024-07-13 06:03:54.992797] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.073 [2024-07-13 06:03:54.992802] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=0 on tqpair(0x1a6ae60) 00:17:04.073 [2024-07-13 06:03:54.992809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.073 [2024-07-13 06:03:54.992816] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.073 [2024-07-13 06:03:54.992820] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.073 [2024-07-13 06:03:54.992825] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1a6ae60) 00:17:04.073 [2024-07-13 06:03:54.992831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.073 [2024-07-13 06:03:54.992838] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.073 [2024-07-13 06:03:54.992842] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.073 [2024-07-13 06:03:54.992846] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1a6ae60) 00:17:04.073 [2024-07-13 06:03:54.992852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.073 [2024-07-13 06:03:54.992859] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.073 [2024-07-13 06:03:54.992863] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.073 [2024-07-13 06:03:54.992867] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a6ae60) 00:17:04.073 [2024-07-13 06:03:54.992873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.073 [2024-07-13 06:03:54.992879] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:17:04.073 [2024-07-13 06:03:54.992893] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:04.073 [2024-07-13 06:03:54.992901] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.073 [2024-07-13 06:03:54.992906] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a6ae60) 00:17:04.073 [2024-07-13 06:03:54.992913] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.073 [2024-07-13 06:03:54.992935] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa3700, cid 0, qid 0 00:17:04.073 [2024-07-13 06:03:54.992942] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa3880, cid 1, qid 0 00:17:04.073 [2024-07-13 06:03:54.992948] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa3a00, cid 2, qid 0 00:17:04.073 [2024-07-13 06:03:54.992953] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa3b80, cid 3, qid 0 00:17:04.073 [2024-07-13 06:03:54.992958] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa3d00, cid 4, qid 0 00:17:04.073 [2024-07-13 06:03:54.993042] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.073 [2024-07-13 06:03:54.993049] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.073 [2024-07-13 06:03:54.993053] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.073 [2024-07-13 06:03:54.993058] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa3d00) on tqpair=0x1a6ae60 00:17:04.073 [2024-07-13 06:03:54.993063] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:17:04.073 [2024-07-13 06:03:54.993073] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:17:04.073 [2024-07-13 06:03:54.993086] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.073 [2024-07-13 06:03:54.993091] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a6ae60) 00:17:04.073 [2024-07-13 06:03:54.993099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.073 [2024-07-13 06:03:54.993119] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa3d00, cid 4, qid 0 00:17:04.073 [2024-07-13 06:03:54.993185] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:04.073 [2024-07-13 06:03:54.993194] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:04.073 [2024-07-13 06:03:54.993198] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:04.073 [2024-07-13 06:03:54.993202] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a6ae60): datao=0, datal=4096, cccid=4 00:17:04.073 [2024-07-13 06:03:54.993207] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1aa3d00) on tqpair(0x1a6ae60): expected_datao=0, payload_size=4096 00:17:04.073 [2024-07-13 06:03:54.993212] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.073 [2024-07-13 06:03:54.993220] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:04.073 [2024-07-13 06:03:54.993225] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:04.073 [2024-07-13 06:03:54.993234] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.073 [2024-07-13 06:03:54.993241] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.074 [2024-07-13 06:03:54.993245] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.074 [2024-07-13 06:03:54.993249] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa3d00) on tqpair=0x1a6ae60 00:17:04.074 [2024-07-13 06:03:54.993263] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:17:04.074 [2024-07-13 06:03:54.993292] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.074 [2024-07-13 06:03:54.993299] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a6ae60) 00:17:04.074 [2024-07-13 06:03:54.993307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.074 [2024-07-13 06:03:54.993315] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.074 [2024-07-13 06:03:54.993320] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.074 [2024-07-13 06:03:54.993324] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a6ae60) 00:17:04.074 [2024-07-13 
06:03:54.993331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.074 [2024-07-13 06:03:54.993357] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa3d00, cid 4, qid 0 00:17:04.074 [2024-07-13 06:03:54.993365] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa3e80, cid 5, qid 0 00:17:04.074 [2024-07-13 06:03:54.993483] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:04.074 [2024-07-13 06:03:54.993491] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:04.074 [2024-07-13 06:03:54.993495] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:04.074 [2024-07-13 06:03:54.993500] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a6ae60): datao=0, datal=1024, cccid=4 00:17:04.074 [2024-07-13 06:03:54.993505] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1aa3d00) on tqpair(0x1a6ae60): expected_datao=0, payload_size=1024 00:17:04.074 [2024-07-13 06:03:54.993510] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.074 [2024-07-13 06:03:54.993517] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:04.074 [2024-07-13 06:03:54.993521] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:04.074 [2024-07-13 06:03:54.993528] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.074 [2024-07-13 06:03:54.993534] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.074 [2024-07-13 06:03:54.993538] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.074 [2024-07-13 06:03:54.993542] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa3e80) on tqpair=0x1a6ae60 00:17:04.074 [2024-07-13 06:03:54.993563] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.074 [2024-07-13 06:03:54.993571] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.074 [2024-07-13 06:03:54.993575] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.074 [2024-07-13 06:03:54.993580] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa3d00) on tqpair=0x1a6ae60 00:17:04.074 [2024-07-13 06:03:54.993593] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.074 [2024-07-13 06:03:54.993598] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a6ae60) 00:17:04.074 [2024-07-13 06:03:54.993606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.074 [2024-07-13 06:03:54.993633] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa3d00, cid 4, qid 0 00:17:04.074 [2024-07-13 06:03:54.993701] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:04.074 [2024-07-13 06:03:54.993708] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:04.074 [2024-07-13 06:03:54.993712] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:04.074 [2024-07-13 06:03:54.993716] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a6ae60): datao=0, datal=3072, cccid=4 00:17:04.074 [2024-07-13 06:03:54.993721] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1aa3d00) on tqpair(0x1a6ae60): expected_datao=0, payload_size=3072 00:17:04.074 
[2024-07-13 06:03:54.993726] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.074 [2024-07-13 06:03:54.993734] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:04.074 [2024-07-13 06:03:54.993738] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:04.074 [2024-07-13 06:03:54.993748] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.074 [2024-07-13 06:03:54.993754] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.074 [2024-07-13 06:03:54.993758] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.074 [2024-07-13 06:03:54.993762] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa3d00) on tqpair=0x1a6ae60 00:17:04.074 [2024-07-13 06:03:54.993772] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.074 [2024-07-13 06:03:54.993777] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a6ae60) 00:17:04.074 ===================================================== 00:17:04.074 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:17:04.074 ===================================================== 00:17:04.074 Controller Capabilities/Features 00:17:04.074 ================================ 00:17:04.074 Vendor ID: 0000 00:17:04.074 Subsystem Vendor ID: 0000 00:17:04.074 Serial Number: .................... 00:17:04.074 Model Number: ........................................ 00:17:04.074 Firmware Version: 24.09 00:17:04.074 Recommended Arb Burst: 0 00:17:04.074 IEEE OUI Identifier: 00 00 00 00:17:04.074 Multi-path I/O 00:17:04.074 May have multiple subsystem ports: No 00:17:04.074 May have multiple controllers: No 00:17:04.074 Associated with SR-IOV VF: No 00:17:04.074 Max Data Transfer Size: 131072 00:17:04.074 Max Number of Namespaces: 0 00:17:04.074 Max Number of I/O Queues: 1024 00:17:04.074 NVMe Specification Version (VS): 1.3 00:17:04.074 NVMe Specification Version (Identify): 1.3 00:17:04.074 Maximum Queue Entries: 128 00:17:04.074 Contiguous Queues Required: Yes 00:17:04.074 Arbitration Mechanisms Supported 00:17:04.074 Weighted Round Robin: Not Supported 00:17:04.074 Vendor Specific: Not Supported 00:17:04.074 Reset Timeout: 15000 ms 00:17:04.074 Doorbell Stride: 4 bytes 00:17:04.074 NVM Subsystem Reset: Not Supported 00:17:04.074 Command Sets Supported 00:17:04.074 NVM Command Set: Supported 00:17:04.074 Boot Partition: Not Supported 00:17:04.074 Memory Page Size Minimum: 4096 bytes 00:17:04.074 Memory Page Size Maximum: 4096 bytes 00:17:04.074 Persistent Memory Region: Not Supported 00:17:04.074 Optional Asynchronous Events Supported 00:17:04.074 Namespace Attribute Notices: Not Supported 00:17:04.074 Firmware Activation Notices: Not Supported 00:17:04.074 ANA Change Notices: Not Supported 00:17:04.074 PLE Aggregate Log Change Notices: Not Supported 00:17:04.074 LBA Status Info Alert Notices: Not Supported 00:17:04.074 EGE Aggregate Log Change Notices: Not Supported 00:17:04.074 Normal NVM Subsystem Shutdown event: Not Supported 00:17:04.074 Zone Descriptor Change Notices: Not Supported 00:17:04.074 Discovery Log Change Notices: Supported 00:17:04.074 Controller Attributes 00:17:04.074 128-bit Host Identifier: Not Supported 00:17:04.074 Non-Operational Permissive Mode: Not Supported 00:17:04.074 NVM Sets: Not Supported 00:17:04.074 Read Recovery Levels: Not Supported 00:17:04.074 Endurance Groups: Not Supported 00:17:04.074 Predictable Latency Mode: Not Supported 
00:17:04.074 Traffic Based Keep ALive: Not Supported 00:17:04.074 Namespace Granularity: Not Supported 00:17:04.074 SQ Associations: Not Supported 00:17:04.074 UUID List: Not Supported 00:17:04.074 Multi-Domain Subsystem: Not Supported 00:17:04.074 Fixed Capacity Management: Not Supported 00:17:04.074 Variable Capacity Management: Not Supported 00:17:04.074 Delete Endurance Group: Not Supported 00:17:04.074 Delete NVM Set: Not Supported 00:17:04.074 Extended LBA Formats Supported: Not Supported 00:17:04.074 Flexible Data Placement Supported: Not Supported 00:17:04.074 00:17:04.074 Controller Memory Buffer Support 00:17:04.074 ================================ 00:17:04.074 Supported: No 00:17:04.074 00:17:04.074 Persistent Memory Region Support 00:17:04.074 ================================ 00:17:04.074 Supported: No 00:17:04.074 00:17:04.074 Admin Command Set Attributes 00:17:04.074 ============================ 00:17:04.074 Security Send/Receive: Not Supported 00:17:04.074 Format NVM: Not Supported 00:17:04.074 Firmware Activate/Download: Not Supported 00:17:04.074 Namespace Management: Not Supported 00:17:04.074 Device Self-Test: Not Supported 00:17:04.074 Directives: Not Supported 00:17:04.074 NVMe-MI: Not Supported 00:17:04.074 Virtualization Management: Not Supported 00:17:04.074 Doorbell Buffer Config: Not Supported 00:17:04.074 Get LBA Status Capability: Not Supported 00:17:04.074 Command & Feature Lockdown Capability: Not Supported 00:17:04.074 Abort Command Limit: 1 00:17:04.074 Async Event Request Limit: 4 00:17:04.074 Number of Firmware Slots: N/A 00:17:04.074 Firmware Slot 1 Read-Only: N/A 00:17:04.074 [2024-07-13 06:03:54.993785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.074 [2024-07-13 06:03:54.993810] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa3d00, cid 4, qid 0 00:17:04.074 [2024-07-13 06:03:54.993876] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:04.074 [2024-07-13 06:03:54.993883] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:04.074 [2024-07-13 06:03:54.993887] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:04.074 [2024-07-13 06:03:54.993892] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a6ae60): datao=0, datal=8, cccid=4 00:17:04.074 [2024-07-13 06:03:54.993897] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1aa3d00) on tqpair(0x1a6ae60): expected_datao=0, payload_size=8 00:17:04.074 [2024-07-13 06:03:54.993902] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.074 [2024-07-13 06:03:54.993909] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:04.074 [2024-07-13 06:03:54.993913] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:04.074 [2024-07-13 06:03:54.993929] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.074 [2024-07-13 06:03:54.993936] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.074 [2024-07-13 06:03:54.993940] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.074 [2024-07-13 06:03:54.993945] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa3d00) on tqpair=0x1a6ae60 00:17:04.074 Firmware Activation Without Reset: N/A 00:17:04.074 Multiple Update Detection Support: N/A 00:17:04.074 Firmware Update Granularity: No Information 
Provided 00:17:04.074 Per-Namespace SMART Log: No 00:17:04.074 Asymmetric Namespace Access Log Page: Not Supported 00:17:04.074 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:17:04.074 Command Effects Log Page: Not Supported 00:17:04.074 Get Log Page Extended Data: Supported 00:17:04.075 Telemetry Log Pages: Not Supported 00:17:04.075 Persistent Event Log Pages: Not Supported 00:17:04.075 Supported Log Pages Log Page: May Support 00:17:04.075 Commands Supported & Effects Log Page: Not Supported 00:17:04.075 Feature Identifiers & Effects Log Page:May Support 00:17:04.075 NVMe-MI Commands & Effects Log Page: May Support 00:17:04.075 Data Area 4 for Telemetry Log: Not Supported 00:17:04.075 Error Log Page Entries Supported: 128 00:17:04.075 Keep Alive: Not Supported 00:17:04.075 00:17:04.075 NVM Command Set Attributes 00:17:04.075 ========================== 00:17:04.075 Submission Queue Entry Size 00:17:04.075 Max: 1 00:17:04.075 Min: 1 00:17:04.075 Completion Queue Entry Size 00:17:04.075 Max: 1 00:17:04.075 Min: 1 00:17:04.075 Number of Namespaces: 0 00:17:04.075 Compare Command: Not Supported 00:17:04.075 Write Uncorrectable Command: Not Supported 00:17:04.075 Dataset Management Command: Not Supported 00:17:04.075 Write Zeroes Command: Not Supported 00:17:04.075 Set Features Save Field: Not Supported 00:17:04.075 Reservations: Not Supported 00:17:04.075 Timestamp: Not Supported 00:17:04.075 Copy: Not Supported 00:17:04.075 Volatile Write Cache: Not Present 00:17:04.075 Atomic Write Unit (Normal): 1 00:17:04.075 Atomic Write Unit (PFail): 1 00:17:04.075 Atomic Compare & Write Unit: 1 00:17:04.075 Fused Compare & Write: Supported 00:17:04.075 Scatter-Gather List 00:17:04.075 SGL Command Set: Supported 00:17:04.075 SGL Keyed: Supported 00:17:04.075 SGL Bit Bucket Descriptor: Not Supported 00:17:04.075 SGL Metadata Pointer: Not Supported 00:17:04.075 Oversized SGL: Not Supported 00:17:04.075 SGL Metadata Address: Not Supported 00:17:04.075 SGL Offset: Supported 00:17:04.075 Transport SGL Data Block: Not Supported 00:17:04.075 Replay Protected Memory Block: Not Supported 00:17:04.075 00:17:04.075 Firmware Slot Information 00:17:04.075 ========================= 00:17:04.075 Active slot: 0 00:17:04.075 00:17:04.075 00:17:04.075 Error Log 00:17:04.075 ========= 00:17:04.075 00:17:04.075 Active Namespaces 00:17:04.075 ================= 00:17:04.075 Discovery Log Page 00:17:04.075 ================== 00:17:04.075 Generation Counter: 2 00:17:04.075 Number of Records: 2 00:17:04.075 Record Format: 0 00:17:04.075 00:17:04.075 Discovery Log Entry 0 00:17:04.075 ---------------------- 00:17:04.075 Transport Type: 3 (TCP) 00:17:04.075 Address Family: 1 (IPv4) 00:17:04.075 Subsystem Type: 3 (Current Discovery Subsystem) 00:17:04.075 Entry Flags: 00:17:04.075 Duplicate Returned Information: 1 00:17:04.075 Explicit Persistent Connection Support for Discovery: 1 00:17:04.075 Transport Requirements: 00:17:04.075 Secure Channel: Not Required 00:17:04.075 Port ID: 0 (0x0000) 00:17:04.075 Controller ID: 65535 (0xffff) 00:17:04.075 Admin Max SQ Size: 128 00:17:04.075 Transport Service Identifier: 4420 00:17:04.075 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:17:04.075 Transport Address: 10.0.0.2 00:17:04.075 Discovery Log Entry 1 00:17:04.075 ---------------------- 00:17:04.075 Transport Type: 3 (TCP) 00:17:04.075 Address Family: 1 (IPv4) 00:17:04.075 Subsystem Type: 2 (NVM Subsystem) 00:17:04.075 Entry Flags: 00:17:04.075 Duplicate Returned Information: 0 00:17:04.075 Explicit 
Persistent Connection Support for Discovery: 0 00:17:04.075 Transport Requirements: 00:17:04.075 Secure Channel: Not Required 00:17:04.075 Port ID: 0 (0x0000) 00:17:04.075 Controller ID: 65535 (0xffff) 00:17:04.075 Admin Max SQ Size: 128 00:17:04.075 Transport Service Identifier: 4420 00:17:04.075 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:17:04.075 Transport Address: 10.0.0.2 [2024-07-13 06:03:54.994087] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:17:04.075 [2024-07-13 06:03:54.994113] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa3700) on tqpair=0x1a6ae60 00:17:04.075 [2024-07-13 06:03:54.994124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:04.075 [2024-07-13 06:03:54.994131] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa3880) on tqpair=0x1a6ae60 00:17:04.075 [2024-07-13 06:03:54.994136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:04.075 [2024-07-13 06:03:54.994142] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa3a00) on tqpair=0x1a6ae60 00:17:04.075 [2024-07-13 06:03:54.994147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:04.075 [2024-07-13 06:03:54.994153] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa3b80) on tqpair=0x1a6ae60 00:17:04.075 [2024-07-13 06:03:54.994158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:04.075 [2024-07-13 06:03:54.994168] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.075 [2024-07-13 06:03:54.994174] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.075 [2024-07-13 06:03:54.994178] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a6ae60) 00:17:04.075 [2024-07-13 06:03:54.994187] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.075 [2024-07-13 06:03:54.994216] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa3b80, cid 3, qid 0 00:17:04.075 [2024-07-13 06:03:54.994277] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.075 [2024-07-13 06:03:54.994285] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.075 [2024-07-13 06:03:54.994289] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.075 [2024-07-13 06:03:54.994294] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa3b80) on tqpair=0x1a6ae60 00:17:04.075 [2024-07-13 06:03:54.994303] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.075 [2024-07-13 06:03:54.994308] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.075 [2024-07-13 06:03:54.994313] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a6ae60) 00:17:04.075 [2024-07-13 06:03:54.994321] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.075 [2024-07-13 06:03:54.994345] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa3b80, cid 3, qid 0 
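Editor's note: the GET LOG PAGE commands above (cdw10 values 00ff0070, 02ff0070 and 00010070, i.e. c2h payloads of 1024, 3072 and 8 bytes) are the tool reading discovery log page 70h: the header first, then the entries once numrec is known, then an 8-byte re-read of the generation counter, which yields the two records printed. A hedged sketch of issuing the same read through the generic log-page call follows, assuming spdk_nvme_ctrlr_cmd_get_log_page() and struct spdk_nvmf_discovery_log_page as declared in spdk/nvme.h and spdk/nvmf_spec.h; a real caller would size the buffer from numrec and re-read, as the tool does.

#include "spdk/stdinc.h"
#include "spdk/nvme.h"
#include "spdk/nvmf_spec.h"

static bool g_log_done;

static void
get_log_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	const struct spdk_nvmf_discovery_log_page *log = cb_arg;

	if (spdk_nvme_cpl_is_error(cpl)) {
		fprintf(stderr, "GET LOG PAGE (discovery) failed\n");
	} else {
		/* Matches "Generation Counter: 2" / "Number of Records: 2" above. */
		printf("genctr=%" PRIu64 " numrec=%" PRIu64 "\n",
		       log->genctr, log->numrec);
	}
	g_log_done = true;
}

/* Read the first 4 KiB of discovery log page 0x70 from a connected
 * discovery controller and wait for the admin completion. */
static int
read_discovery_log(struct spdk_nvme_ctrlr *ctrlr)
{
	struct spdk_nvmf_discovery_log_page *log = calloc(1, 4096);
	int rc;

	if (log == NULL) {
		return -ENOMEM;
	}

	rc = spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY,
					      SPDK_NVME_GLOBAL_NS_TAG, log,
					      4096, 0, get_log_done, log);
	if (rc == 0) {
		/* Same admin polling that produced the nvme_tcp_req_complete
		 * lines in the log. */
		while (!g_log_done) {
			spdk_nvme_ctrlr_process_admin_completions(ctrlr);
		}
	}

	free(log);
	return rc;
}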
00:17:04.075 [2024-07-13 06:03:54.994428] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.075 [2024-07-13 06:03:54.994438] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.075 [2024-07-13 06:03:54.994442] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.075 [2024-07-13 06:03:54.994446] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa3b80) on tqpair=0x1a6ae60 00:17:04.075 [2024-07-13 06:03:54.994452] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:17:04.075 [2024-07-13 06:03:54.994457] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:17:04.075 [2024-07-13 06:03:54.994469] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.075 [2024-07-13 06:03:54.994475] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.075 [2024-07-13 06:03:54.994479] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a6ae60) 00:17:04.075 [2024-07-13 06:03:54.994487] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.075 [2024-07-13 06:03:54.994509] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa3b80, cid 3, qid 0 00:17:04.075 [2024-07-13 06:03:54.994558] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.075 [2024-07-13 06:03:54.994565] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.075 [2024-07-13 06:03:54.994569] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.075 [2024-07-13 06:03:54.994574] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa3b80) on tqpair=0x1a6ae60 00:17:04.075 [2024-07-13 06:03:54.994586] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.075 [2024-07-13 06:03:54.994591] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.075 [2024-07-13 06:03:54.994595] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a6ae60) 00:17:04.075 [2024-07-13 06:03:54.994603] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.075 [2024-07-13 06:03:54.994623] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa3b80, cid 3, qid 0 00:17:04.075 [2024-07-13 06:03:54.994674] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.075 [2024-07-13 06:03:54.994681] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.075 [2024-07-13 06:03:54.994685] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.075 [2024-07-13 06:03:54.994690] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa3b80) on tqpair=0x1a6ae60 00:17:04.075 [2024-07-13 06:03:54.994701] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.075 [2024-07-13 06:03:54.994706] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.075 [2024-07-13 06:03:54.994711] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a6ae60) 00:17:04.075 [2024-07-13 06:03:54.994718] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.075 
[2024-07-13 06:03:54.994738] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa3b80, cid 3, qid 0 00:17:04.075 [2024-07-13 06:03:54.994781] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.075 [2024-07-13 06:03:54.994788] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.075 [2024-07-13 06:03:54.994792] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.075 [2024-07-13 06:03:54.994796] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa3b80) on tqpair=0x1a6ae60 00:17:04.075 [2024-07-13 06:03:54.994808] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.075 [2024-07-13 06:03:54.994813] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.075 [2024-07-13 06:03:54.994817] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a6ae60) 00:17:04.075 [2024-07-13 06:03:54.994825] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.075 [2024-07-13 06:03:54.994844] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa3b80, cid 3, qid 0 00:17:04.075 [2024-07-13 06:03:54.994901] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.075 [2024-07-13 06:03:54.994908] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.075 [2024-07-13 06:03:54.994912] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.075 [2024-07-13 06:03:54.994917] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa3b80) on tqpair=0x1a6ae60 00:17:04.075 [2024-07-13 06:03:54.994928] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.075 [2024-07-13 06:03:54.994933] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.075 [2024-07-13 06:03:54.994937] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a6ae60) 00:17:04.075 [2024-07-13 06:03:54.994945] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.075 [2024-07-13 06:03:54.994964] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa3b80, cid 3, qid 0 00:17:04.075 [2024-07-13 06:03:54.995010] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.075 [2024-07-13 06:03:54.995017] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.075 [2024-07-13 06:03:54.995021] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.075 [2024-07-13 06:03:54.995026] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa3b80) on tqpair=0x1a6ae60 00:17:04.075 [2024-07-13 06:03:54.995037] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.075 [2024-07-13 06:03:54.995042] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.075 [2024-07-13 06:03:54.995046] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a6ae60) 00:17:04.075 [2024-07-13 06:03:54.995054] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.075 [2024-07-13 06:03:54.995073] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa3b80, cid 3, qid 0 00:17:04.075 [2024-07-13 06:03:54.995121] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:17:04.075 [2024-07-13 06:03:54.995128] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.075 [2024-07-13 06:03:54.995132] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.075 [2024-07-13 06:03:54.995137] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa3b80) on tqpair=0x1a6ae60 00:17:04.075 [2024-07-13 06:03:54.995148] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.075 [2024-07-13 06:03:54.995153] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.075 [2024-07-13 06:03:54.995157] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a6ae60) 00:17:04.075 [2024-07-13 06:03:54.995165] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.075 [2024-07-13 06:03:54.995185] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa3b80, cid 3, qid 0 00:17:04.075 [2024-07-13 06:03:54.995229] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.075 [2024-07-13 06:03:54.995236] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.075 [2024-07-13 06:03:54.995240] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.075 [2024-07-13 06:03:54.995245] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa3b80) on tqpair=0x1a6ae60 00:17:04.075 [2024-07-13 06:03:54.995256] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.075 [2024-07-13 06:03:54.995262] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.075 [2024-07-13 06:03:54.995266] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a6ae60) 00:17:04.075 [2024-07-13 06:03:54.995274] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.075 [2024-07-13 06:03:54.995293] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa3b80, cid 3, qid 0 00:17:04.075 [2024-07-13 06:03:54.995343] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.075 [2024-07-13 06:03:54.995351] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.075 [2024-07-13 06:03:54.995355] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.075 [2024-07-13 06:03:54.995359] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa3b80) on tqpair=0x1a6ae60 00:17:04.075 [2024-07-13 06:03:54.999388] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.075 [2024-07-13 06:03:54.999410] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.075 [2024-07-13 06:03:54.999416] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a6ae60) 00:17:04.075 [2024-07-13 06:03:54.999426] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.075 [2024-07-13 06:03:54.999454] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa3b80, cid 3, qid 0 00:17:04.075 [2024-07-13 06:03:54.999516] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.075 [2024-07-13 06:03:54.999524] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.075 [2024-07-13 06:03:54.999528] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.075 [2024-07-13 06:03:54.999533] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa3b80) on tqpair=0x1a6ae60 00:17:04.075 [2024-07-13 06:03:54.999543] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:17:04.075 00:17:04.075 06:03:55 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:17:04.075 [2024-07-13 06:03:55.039691] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:17:04.075 [2024-07-13 06:03:55.039745] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88233 ] 00:17:04.075 [2024-07-13 06:03:55.176996] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:17:04.075 [2024-07-13 06:03:55.177057] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:04.075 [2024-07-13 06:03:55.177065] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:04.075 [2024-07-13 06:03:55.177077] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:04.075 [2024-07-13 06:03:55.177083] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:04.075 [2024-07-13 06:03:55.177192] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:17:04.075 [2024-07-13 06:03:55.177241] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x5f2e60 0 00:17:04.075 [2024-07-13 06:03:55.181387] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:04.076 [2024-07-13 06:03:55.181412] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:04.076 [2024-07-13 06:03:55.181418] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:04.076 [2024-07-13 06:03:55.181422] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:04.076 [2024-07-13 06:03:55.181464] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.076 [2024-07-13 06:03:55.181472] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.076 [2024-07-13 06:03:55.181476] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5f2e60) 00:17:04.076 [2024-07-13 06:03:55.181489] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:04.076 [2024-07-13 06:03:55.181522] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62b700, cid 0, qid 0 00:17:04.076 [2024-07-13 06:03:55.189391] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.076 [2024-07-13 06:03:55.189414] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.076 [2024-07-13 06:03:55.189420] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.076 [2024-07-13 06:03:55.189425] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x62b700) on tqpair=0x5f2e60 00:17:04.076 [2024-07-13 06:03:55.189436] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:04.076 [2024-07-13 06:03:55.189444] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:17:04.076 [2024-07-13 06:03:55.189451] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:17:04.076 [2024-07-13 06:03:55.189468] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.076 [2024-07-13 06:03:55.189474] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.076 [2024-07-13 06:03:55.189479] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5f2e60) 00:17:04.076 [2024-07-13 06:03:55.189489] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.076 [2024-07-13 06:03:55.189518] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62b700, cid 0, qid 0 00:17:04.076 [2024-07-13 06:03:55.189610] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.076 [2024-07-13 06:03:55.189618] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.076 [2024-07-13 06:03:55.189622] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.076 [2024-07-13 06:03:55.189627] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62b700) on tqpair=0x5f2e60 00:17:04.076 [2024-07-13 06:03:55.189633] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:17:04.076 [2024-07-13 06:03:55.189641] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:17:04.076 [2024-07-13 06:03:55.189649] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.076 [2024-07-13 06:03:55.189654] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.076 [2024-07-13 06:03:55.189658] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5f2e60) 00:17:04.076 [2024-07-13 06:03:55.189666] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.076 [2024-07-13 06:03:55.189686] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62b700, cid 0, qid 0 00:17:04.076 [2024-07-13 06:03:55.189764] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.076 [2024-07-13 06:03:55.189772] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.076 [2024-07-13 06:03:55.189776] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.076 [2024-07-13 06:03:55.189781] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62b700) on tqpair=0x5f2e60 00:17:04.076 [2024-07-13 06:03:55.189787] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:17:04.076 [2024-07-13 06:03:55.189796] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:17:04.076 [2024-07-13 06:03:55.189804] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.076 [2024-07-13 06:03:55.189809] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.076 [2024-07-13 06:03:55.189813] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5f2e60) 00:17:04.076 [2024-07-13 06:03:55.189821] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.076 [2024-07-13 06:03:55.189840] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62b700, cid 0, qid 0 00:17:04.076 [2024-07-13 06:03:55.189922] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.076 [2024-07-13 06:03:55.189931] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.076 [2024-07-13 06:03:55.189935] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.076 [2024-07-13 06:03:55.189940] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62b700) on tqpair=0x5f2e60 00:17:04.076 [2024-07-13 06:03:55.189946] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:04.076 [2024-07-13 06:03:55.189957] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.076 [2024-07-13 06:03:55.189962] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.076 [2024-07-13 06:03:55.189967] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5f2e60) 00:17:04.076 [2024-07-13 06:03:55.189985] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.076 [2024-07-13 06:03:55.190007] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62b700, cid 0, qid 0 00:17:04.076 [2024-07-13 06:03:55.190090] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.076 [2024-07-13 06:03:55.190098] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.076 [2024-07-13 06:03:55.190102] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.076 [2024-07-13 06:03:55.190106] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62b700) on tqpair=0x5f2e60 00:17:04.076 [2024-07-13 06:03:55.190112] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:17:04.076 [2024-07-13 06:03:55.190117] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:17:04.076 [2024-07-13 06:03:55.190126] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:04.076 [2024-07-13 06:03:55.190232] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:17:04.076 [2024-07-13 06:03:55.190237] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:04.076 [2024-07-13 06:03:55.190246] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.076 [2024-07-13 06:03:55.190250] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.076 [2024-07-13 06:03:55.190255] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5f2e60) 00:17:04.076 [2024-07-13 06:03:55.190262] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.076 [2024-07-13 06:03:55.190282] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62b700, cid 0, qid 0 00:17:04.076 [2024-07-13 06:03:55.190354] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.076 [2024-07-13 06:03:55.190362] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.076 [2024-07-13 06:03:55.190366] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.076 [2024-07-13 06:03:55.190383] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62b700) on tqpair=0x5f2e60 00:17:04.076 [2024-07-13 06:03:55.190390] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:04.076 [2024-07-13 06:03:55.190402] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.076 [2024-07-13 06:03:55.190407] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.076 [2024-07-13 06:03:55.190411] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5f2e60) 00:17:04.076 [2024-07-13 06:03:55.190419] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.076 [2024-07-13 06:03:55.190439] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62b700, cid 0, qid 0 00:17:04.076 [2024-07-13 06:03:55.190506] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.076 [2024-07-13 06:03:55.190514] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.076 [2024-07-13 06:03:55.190518] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.076 [2024-07-13 06:03:55.190522] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62b700) on tqpair=0x5f2e60 00:17:04.076 [2024-07-13 06:03:55.190527] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:04.076 [2024-07-13 06:03:55.190533] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:17:04.076 [2024-07-13 06:03:55.190542] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:17:04.076 [2024-07-13 06:03:55.190553] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:17:04.076 [2024-07-13 06:03:55.190563] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.076 [2024-07-13 06:03:55.190568] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5f2e60) 00:17:04.076 [2024-07-13 06:03:55.190576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.076 [2024-07-13 06:03:55.190596] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62b700, cid 0, qid 0 00:17:04.076 [2024-07-13 06:03:55.190715] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:04.076 [2024-07-13 06:03:55.190732] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:04.076 [2024-07-13 
06:03:55.190737] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:04.076 [2024-07-13 06:03:55.190741] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5f2e60): datao=0, datal=4096, cccid=0 00:17:04.076 [2024-07-13 06:03:55.190747] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62b700) on tqpair(0x5f2e60): expected_datao=0, payload_size=4096 00:17:04.076 [2024-07-13 06:03:55.190752] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.076 [2024-07-13 06:03:55.190760] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:04.076 [2024-07-13 06:03:55.190765] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:04.076 [2024-07-13 06:03:55.190775] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.076 [2024-07-13 06:03:55.190781] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.076 [2024-07-13 06:03:55.190785] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.076 [2024-07-13 06:03:55.190789] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62b700) on tqpair=0x5f2e60 00:17:04.076 [2024-07-13 06:03:55.190798] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:17:04.076 [2024-07-13 06:03:55.190804] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:17:04.076 [2024-07-13 06:03:55.190809] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:17:04.076 [2024-07-13 06:03:55.190814] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:17:04.076 [2024-07-13 06:03:55.190819] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:17:04.076 [2024-07-13 06:03:55.190824] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:17:04.076 [2024-07-13 06:03:55.190834] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:17:04.076 [2024-07-13 06:03:55.190842] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.076 [2024-07-13 06:03:55.190847] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.076 [2024-07-13 06:03:55.190851] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5f2e60) 00:17:04.076 [2024-07-13 06:03:55.190859] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:04.076 [2024-07-13 06:03:55.190880] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62b700, cid 0, qid 0 00:17:04.076 [2024-07-13 06:03:55.190958] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.076 [2024-07-13 06:03:55.190966] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.076 [2024-07-13 06:03:55.190970] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.076 [2024-07-13 06:03:55.190974] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62b700) on tqpair=0x5f2e60 00:17:04.076 [2024-07-13 06:03:55.190982] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.076 [2024-07-13 
06:03:55.190986] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.076 [2024-07-13 06:03:55.190990] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5f2e60) 00:17:04.076 [2024-07-13 06:03:55.190997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.076 [2024-07-13 06:03:55.191004] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.076 [2024-07-13 06:03:55.191009] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.076 [2024-07-13 06:03:55.191013] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x5f2e60) 00:17:04.076 [2024-07-13 06:03:55.191019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.076 [2024-07-13 06:03:55.191026] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.076 [2024-07-13 06:03:55.191030] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.076 [2024-07-13 06:03:55.191034] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x5f2e60) 00:17:04.076 [2024-07-13 06:03:55.191040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.076 [2024-07-13 06:03:55.191047] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.076 [2024-07-13 06:03:55.191051] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.076 [2024-07-13 06:03:55.191055] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5f2e60) 00:17:04.076 [2024-07-13 06:03:55.191062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.076 [2024-07-13 06:03:55.191067] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:04.076 [2024-07-13 06:03:55.191081] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:04.076 [2024-07-13 06:03:55.191089] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.076 [2024-07-13 06:03:55.191094] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5f2e60) 00:17:04.076 [2024-07-13 06:03:55.191101] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.076 [2024-07-13 06:03:55.191122] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62b700, cid 0, qid 0 00:17:04.076 [2024-07-13 06:03:55.191129] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62b880, cid 1, qid 0 00:17:04.076 [2024-07-13 06:03:55.191135] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62ba00, cid 2, qid 0 00:17:04.076 [2024-07-13 06:03:55.191140] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62bb80, cid 3, qid 0 00:17:04.076 [2024-07-13 06:03:55.191145] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62bd00, cid 4, qid 0 00:17:04.076 [2024-07-13 06:03:55.191274] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.076 
[2024-07-13 06:03:55.191290] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.076 [2024-07-13 06:03:55.191295] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.076 [2024-07-13 06:03:55.191300] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62bd00) on tqpair=0x5f2e60 00:17:04.076 [2024-07-13 06:03:55.191306] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:17:04.076 [2024-07-13 06:03:55.191316] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:04.076 [2024-07-13 06:03:55.191326] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:17:04.076 [2024-07-13 06:03:55.191333] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:17:04.076 [2024-07-13 06:03:55.191341] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.076 [2024-07-13 06:03:55.191345] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.076 [2024-07-13 06:03:55.191350] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5f2e60) 00:17:04.076 [2024-07-13 06:03:55.191357] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:04.076 [2024-07-13 06:03:55.191393] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62bd00, cid 4, qid 0 00:17:04.076 [2024-07-13 06:03:55.191475] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.076 [2024-07-13 06:03:55.191483] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.076 [2024-07-13 06:03:55.191487] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.076 [2024-07-13 06:03:55.191492] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62bd00) on tqpair=0x5f2e60 00:17:04.076 [2024-07-13 06:03:55.191556] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:17:04.076 [2024-07-13 06:03:55.191567] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:17:04.076 [2024-07-13 06:03:55.191576] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.076 [2024-07-13 06:03:55.191581] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5f2e60) 00:17:04.076 [2024-07-13 06:03:55.191589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.076 [2024-07-13 06:03:55.191608] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62bd00, cid 4, qid 0 00:17:04.076 [2024-07-13 06:03:55.191698] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:04.076 [2024-07-13 06:03:55.191710] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:04.076 [2024-07-13 06:03:55.191715] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:04.076 [2024-07-13 06:03:55.191719] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
c2h_data info on tqpair(0x5f2e60): datao=0, datal=4096, cccid=4 00:17:04.076 [2024-07-13 06:03:55.191724] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62bd00) on tqpair(0x5f2e60): expected_datao=0, payload_size=4096 00:17:04.076 [2024-07-13 06:03:55.191730] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.076 [2024-07-13 06:03:55.191737] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:04.077 [2024-07-13 06:03:55.191742] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:04.077 [2024-07-13 06:03:55.191751] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.077 [2024-07-13 06:03:55.191757] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.077 [2024-07-13 06:03:55.191761] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.077 [2024-07-13 06:03:55.191766] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62bd00) on tqpair=0x5f2e60 00:17:04.077 [2024-07-13 06:03:55.191781] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:17:04.077 [2024-07-13 06:03:55.191792] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:17:04.077 [2024-07-13 06:03:55.191803] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:17:04.077 [2024-07-13 06:03:55.191811] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.077 [2024-07-13 06:03:55.191815] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5f2e60) 00:17:04.077 [2024-07-13 06:03:55.191823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.077 [2024-07-13 06:03:55.191844] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62bd00, cid 4, qid 0 00:17:04.077 [2024-07-13 06:03:55.191939] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:04.077 [2024-07-13 06:03:55.191947] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:04.077 [2024-07-13 06:03:55.191951] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:04.077 [2024-07-13 06:03:55.191955] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5f2e60): datao=0, datal=4096, cccid=4 00:17:04.077 [2024-07-13 06:03:55.191960] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62bd00) on tqpair(0x5f2e60): expected_datao=0, payload_size=4096 00:17:04.077 [2024-07-13 06:03:55.191965] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.077 [2024-07-13 06:03:55.191973] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:04.077 [2024-07-13 06:03:55.191977] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:04.077 [2024-07-13 06:03:55.191986] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.077 [2024-07-13 06:03:55.191992] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.077 [2024-07-13 06:03:55.191996] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.077 [2024-07-13 06:03:55.192000] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62bd00) on tqpair=0x5f2e60 00:17:04.077 [2024-07-13 06:03:55.192016] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:04.077 [2024-07-13 06:03:55.192027] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:04.077 [2024-07-13 06:03:55.192036] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.077 [2024-07-13 06:03:55.192040] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5f2e60) 00:17:04.077 [2024-07-13 06:03:55.192048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.077 [2024-07-13 06:03:55.192068] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62bd00, cid 4, qid 0 00:17:04.077 [2024-07-13 06:03:55.192154] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:04.077 [2024-07-13 06:03:55.192161] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:04.077 [2024-07-13 06:03:55.192165] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:04.077 [2024-07-13 06:03:55.192169] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5f2e60): datao=0, datal=4096, cccid=4 00:17:04.077 [2024-07-13 06:03:55.192174] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62bd00) on tqpair(0x5f2e60): expected_datao=0, payload_size=4096 00:17:04.077 [2024-07-13 06:03:55.192179] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.077 [2024-07-13 06:03:55.192186] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:04.077 [2024-07-13 06:03:55.192191] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:04.077 [2024-07-13 06:03:55.192199] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.077 [2024-07-13 06:03:55.192205] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.077 [2024-07-13 06:03:55.192209] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.077 [2024-07-13 06:03:55.192214] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62bd00) on tqpair=0x5f2e60 00:17:04.077 [2024-07-13 06:03:55.192223] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:04.077 [2024-07-13 06:03:55.192232] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:17:04.077 [2024-07-13 06:03:55.192243] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:17:04.077 [2024-07-13 06:03:55.192257] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:17:04.077 [2024-07-13 06:03:55.192263] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:04.077 [2024-07-13 06:03:55.192269] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:17:04.077 [2024-07-13 06:03:55.192275] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:17:04.077 [2024-07-13 06:03:55.192280] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:17:04.077 [2024-07-13 06:03:55.192286] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:17:04.077 [2024-07-13 06:03:55.192302] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.077 [2024-07-13 06:03:55.192307] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5f2e60) 00:17:04.077 [2024-07-13 06:03:55.192315] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.077 [2024-07-13 06:03:55.192323] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.077 [2024-07-13 06:03:55.192327] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.077 [2024-07-13 06:03:55.192331] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x5f2e60) 00:17:04.077 [2024-07-13 06:03:55.192338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.077 [2024-07-13 06:03:55.192363] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62bd00, cid 4, qid 0 00:17:04.077 [2024-07-13 06:03:55.192385] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62be80, cid 5, qid 0 00:17:04.077 [2024-07-13 06:03:55.192480] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.077 [2024-07-13 06:03:55.192490] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.077 [2024-07-13 06:03:55.192494] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.077 [2024-07-13 06:03:55.192498] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62bd00) on tqpair=0x5f2e60 00:17:04.077 [2024-07-13 06:03:55.192505] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.077 [2024-07-13 06:03:55.192512] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.077 [2024-07-13 06:03:55.192515] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.077 [2024-07-13 06:03:55.192520] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62be80) on tqpair=0x5f2e60 00:17:04.077 [2024-07-13 06:03:55.192531] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.077 [2024-07-13 06:03:55.192536] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x5f2e60) 00:17:04.077 [2024-07-13 06:03:55.192544] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.077 [2024-07-13 06:03:55.192566] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62be80, cid 5, qid 0 00:17:04.077 [2024-07-13 06:03:55.192629] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.077 [2024-07-13 06:03:55.192637] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.077 [2024-07-13 06:03:55.192641] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.077 [2024-07-13 06:03:55.192645] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62be80) on 
tqpair=0x5f2e60 00:17:04.077 [2024-07-13 06:03:55.192656] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.077 [2024-07-13 06:03:55.192661] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x5f2e60) 00:17:04.077 [2024-07-13 06:03:55.192669] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.077 [2024-07-13 06:03:55.192686] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62be80, cid 5, qid 0 00:17:04.077 [2024-07-13 06:03:55.192748] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.077 [2024-07-13 06:03:55.192756] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.077 [2024-07-13 06:03:55.192760] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.077 [2024-07-13 06:03:55.192764] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62be80) on tqpair=0x5f2e60 00:17:04.077 [2024-07-13 06:03:55.192775] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.077 [2024-07-13 06:03:55.192780] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x5f2e60) 00:17:04.077 [2024-07-13 06:03:55.192787] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.077 [2024-07-13 06:03:55.192805] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62be80, cid 5, qid 0 00:17:04.077 [2024-07-13 06:03:55.192866] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.077 [2024-07-13 06:03:55.192874] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.077 [2024-07-13 06:03:55.192878] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.077 [2024-07-13 06:03:55.192882] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62be80) on tqpair=0x5f2e60 00:17:04.077 [2024-07-13 06:03:55.192900] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.077 [2024-07-13 06:03:55.192906] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x5f2e60) 00:17:04.077 [2024-07-13 06:03:55.192914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.077 [2024-07-13 06:03:55.192922] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.077 [2024-07-13 06:03:55.192926] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5f2e60) 00:17:04.077 [2024-07-13 06:03:55.192933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.077 [2024-07-13 06:03:55.192941] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.077 [2024-07-13 06:03:55.192946] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x5f2e60) 00:17:04.077 [2024-07-13 06:03:55.192952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.077 [2024-07-13 06:03:55.192962] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:17:04.077 [2024-07-13 06:03:55.192967] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x5f2e60) 00:17:04.077 [2024-07-13 06:03:55.192974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.077 [2024-07-13 06:03:55.192994] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62be80, cid 5, qid 0 00:17:04.077 [2024-07-13 06:03:55.193002] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62bd00, cid 4, qid 0 00:17:04.077 [2024-07-13 06:03:55.193007] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62c000, cid 6, qid 0 00:17:04.077 [2024-07-13 06:03:55.193012] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62c180, cid 7, qid 0 00:17:04.077 [2024-07-13 06:03:55.193201] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:04.077 [2024-07-13 06:03:55.193217] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:04.077 [2024-07-13 06:03:55.193222] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:04.077 [2024-07-13 06:03:55.193227] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5f2e60): datao=0, datal=8192, cccid=5 00:17:04.077 [2024-07-13 06:03:55.193232] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62be80) on tqpair(0x5f2e60): expected_datao=0, payload_size=8192 00:17:04.077 [2024-07-13 06:03:55.193237] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.077 [2024-07-13 06:03:55.193255] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:04.077 [2024-07-13 06:03:55.193261] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:04.077 [2024-07-13 06:03:55.193268] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:04.077 [2024-07-13 06:03:55.193274] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:04.077 [2024-07-13 06:03:55.193278] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:04.077 [2024-07-13 06:03:55.193282] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5f2e60): datao=0, datal=512, cccid=4 00:17:04.077 [2024-07-13 06:03:55.193287] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62bd00) on tqpair(0x5f2e60): expected_datao=0, payload_size=512 00:17:04.077 [2024-07-13 06:03:55.193291] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.077 [2024-07-13 06:03:55.193298] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:04.077 [2024-07-13 06:03:55.193302] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:04.077 [2024-07-13 06:03:55.193308] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:04.077 [2024-07-13 06:03:55.193329] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:04.077 [2024-07-13 06:03:55.193333] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:04.077 [2024-07-13 06:03:55.193337] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5f2e60): datao=0, datal=512, cccid=6 00:17:04.077 [2024-07-13 06:03:55.193342] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62c000) on tqpair(0x5f2e60): expected_datao=0, payload_size=512 00:17:04.077 [2024-07-13 06:03:55.193346] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:17:04.077 [2024-07-13 06:03:55.193353] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:04.077 [2024-07-13 06:03:55.193357] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:04.077 [2024-07-13 06:03:55.193362] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:04.077 [2024-07-13 06:03:55.193368] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:04.077 [2024-07-13 06:03:55.193372] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:04.077 [2024-07-13 06:03:55.193376] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5f2e60): datao=0, datal=4096, cccid=7 00:17:04.077 [2024-07-13 06:03:55.197381] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62c180) on tqpair(0x5f2e60): expected_datao=0, payload_size=4096 00:17:04.077 [2024-07-13 06:03:55.197401] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.077 [2024-07-13 06:03:55.197412] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:04.077 [2024-07-13 06:03:55.197417] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:04.077 [2024-07-13 06:03:55.197428] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.077 [2024-07-13 06:03:55.197435] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.077 [2024-07-13 06:03:55.197439] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.077 [2024-07-13 06:03:55.197443] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62be80) on tqpair=0x5f2e60 00:17:04.077 [2024-07-13 06:03:55.197495] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.077 [2024-07-13 06:03:55.197502] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.077 [2024-07-13 06:03:55.197506] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.077 [2024-07-13 06:03:55.197510] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62bd00) on tqpair=0x5f2e60 00:17:04.077 [2024-07-13 06:03:55.197523] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.077 [2024-07-13 06:03:55.197529] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.077 [2024-07-13 06:03:55.197533] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.077 [2024-07-13 06:03:55.197537] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62c000) on tqpair=0x5f2e60 00:17:04.077 [2024-07-13 06:03:55.197545] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.077 [2024-07-13 06:03:55.197552] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.077 [2024-07-13 06:03:55.197556] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.077 [2024-07-13 06:03:55.197560] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62c180) on tqpair=0x5f2e60 00:17:04.077 ===================================================== 00:17:04.077 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:04.078 ===================================================== 00:17:04.078 Controller Capabilities/Features 00:17:04.078 ================================ 00:17:04.078 Vendor ID: 8086 00:17:04.078 Subsystem Vendor ID: 8086 00:17:04.078 Serial Number: SPDK00000000000001 00:17:04.078 Model Number: SPDK bdev Controller 00:17:04.078 Firmware Version: 24.09 00:17:04.078 
Recommended Arb Burst: 6 00:17:04.078 IEEE OUI Identifier: e4 d2 5c 00:17:04.078 Multi-path I/O 00:17:04.078 May have multiple subsystem ports: Yes 00:17:04.078 May have multiple controllers: Yes 00:17:04.078 Associated with SR-IOV VF: No 00:17:04.078 Max Data Transfer Size: 131072 00:17:04.078 Max Number of Namespaces: 32 00:17:04.078 Max Number of I/O Queues: 127 00:17:04.078 NVMe Specification Version (VS): 1.3 00:17:04.078 NVMe Specification Version (Identify): 1.3 00:17:04.078 Maximum Queue Entries: 128 00:17:04.078 Contiguous Queues Required: Yes 00:17:04.078 Arbitration Mechanisms Supported 00:17:04.078 Weighted Round Robin: Not Supported 00:17:04.078 Vendor Specific: Not Supported 00:17:04.078 Reset Timeout: 15000 ms 00:17:04.078 Doorbell Stride: 4 bytes 00:17:04.078 NVM Subsystem Reset: Not Supported 00:17:04.078 Command Sets Supported 00:17:04.078 NVM Command Set: Supported 00:17:04.078 Boot Partition: Not Supported 00:17:04.078 Memory Page Size Minimum: 4096 bytes 00:17:04.078 Memory Page Size Maximum: 4096 bytes 00:17:04.078 Persistent Memory Region: Not Supported 00:17:04.078 Optional Asynchronous Events Supported 00:17:04.078 Namespace Attribute Notices: Supported 00:17:04.078 Firmware Activation Notices: Not Supported 00:17:04.078 ANA Change Notices: Not Supported 00:17:04.078 PLE Aggregate Log Change Notices: Not Supported 00:17:04.078 LBA Status Info Alert Notices: Not Supported 00:17:04.078 EGE Aggregate Log Change Notices: Not Supported 00:17:04.078 Normal NVM Subsystem Shutdown event: Not Supported 00:17:04.078 Zone Descriptor Change Notices: Not Supported 00:17:04.078 Discovery Log Change Notices: Not Supported 00:17:04.078 Controller Attributes 00:17:04.078 128-bit Host Identifier: Supported 00:17:04.078 Non-Operational Permissive Mode: Not Supported 00:17:04.078 NVM Sets: Not Supported 00:17:04.078 Read Recovery Levels: Not Supported 00:17:04.078 Endurance Groups: Not Supported 00:17:04.078 Predictable Latency Mode: Not Supported 00:17:04.078 Traffic Based Keep ALive: Not Supported 00:17:04.078 Namespace Granularity: Not Supported 00:17:04.078 SQ Associations: Not Supported 00:17:04.078 UUID List: Not Supported 00:17:04.078 Multi-Domain Subsystem: Not Supported 00:17:04.078 Fixed Capacity Management: Not Supported 00:17:04.078 Variable Capacity Management: Not Supported 00:17:04.078 Delete Endurance Group: Not Supported 00:17:04.078 Delete NVM Set: Not Supported 00:17:04.078 Extended LBA Formats Supported: Not Supported 00:17:04.078 Flexible Data Placement Supported: Not Supported 00:17:04.078 00:17:04.078 Controller Memory Buffer Support 00:17:04.078 ================================ 00:17:04.078 Supported: No 00:17:04.078 00:17:04.078 Persistent Memory Region Support 00:17:04.078 ================================ 00:17:04.078 Supported: No 00:17:04.078 00:17:04.078 Admin Command Set Attributes 00:17:04.078 ============================ 00:17:04.078 Security Send/Receive: Not Supported 00:17:04.078 Format NVM: Not Supported 00:17:04.078 Firmware Activate/Download: Not Supported 00:17:04.078 Namespace Management: Not Supported 00:17:04.078 Device Self-Test: Not Supported 00:17:04.078 Directives: Not Supported 00:17:04.078 NVMe-MI: Not Supported 00:17:04.078 Virtualization Management: Not Supported 00:17:04.078 Doorbell Buffer Config: Not Supported 00:17:04.078 Get LBA Status Capability: Not Supported 00:17:04.078 Command & Feature Lockdown Capability: Not Supported 00:17:04.078 Abort Command Limit: 4 00:17:04.078 Async Event Request Limit: 4 00:17:04.078 Number of 
Firmware Slots: N/A 00:17:04.078 Firmware Slot 1 Read-Only: N/A 00:17:04.078 Firmware Activation Without Reset: N/A 00:17:04.078 Multiple Update Detection Support: N/A 00:17:04.078 Firmware Update Granularity: No Information Provided 00:17:04.078 Per-Namespace SMART Log: No 00:17:04.078 Asymmetric Namespace Access Log Page: Not Supported 00:17:04.078 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:17:04.078 Command Effects Log Page: Supported 00:17:04.078 Get Log Page Extended Data: Supported 00:17:04.078 Telemetry Log Pages: Not Supported 00:17:04.078 Persistent Event Log Pages: Not Supported 00:17:04.078 Supported Log Pages Log Page: May Support 00:17:04.078 Commands Supported & Effects Log Page: Not Supported 00:17:04.078 Feature Identifiers & Effects Log Page:May Support 00:17:04.078 NVMe-MI Commands & Effects Log Page: May Support 00:17:04.078 Data Area 4 for Telemetry Log: Not Supported 00:17:04.078 Error Log Page Entries Supported: 128 00:17:04.078 Keep Alive: Supported 00:17:04.078 Keep Alive Granularity: 10000 ms 00:17:04.078 00:17:04.078 NVM Command Set Attributes 00:17:04.078 ========================== 00:17:04.078 Submission Queue Entry Size 00:17:04.078 Max: 64 00:17:04.078 Min: 64 00:17:04.078 Completion Queue Entry Size 00:17:04.078 Max: 16 00:17:04.078 Min: 16 00:17:04.078 Number of Namespaces: 32 00:17:04.078 Compare Command: Supported 00:17:04.078 Write Uncorrectable Command: Not Supported 00:17:04.078 Dataset Management Command: Supported 00:17:04.078 Write Zeroes Command: Supported 00:17:04.078 Set Features Save Field: Not Supported 00:17:04.078 Reservations: Supported 00:17:04.078 Timestamp: Not Supported 00:17:04.078 Copy: Supported 00:17:04.078 Volatile Write Cache: Present 00:17:04.078 Atomic Write Unit (Normal): 1 00:17:04.078 Atomic Write Unit (PFail): 1 00:17:04.078 Atomic Compare & Write Unit: 1 00:17:04.078 Fused Compare & Write: Supported 00:17:04.078 Scatter-Gather List 00:17:04.078 SGL Command Set: Supported 00:17:04.078 SGL Keyed: Supported 00:17:04.078 SGL Bit Bucket Descriptor: Not Supported 00:17:04.078 SGL Metadata Pointer: Not Supported 00:17:04.078 Oversized SGL: Not Supported 00:17:04.078 SGL Metadata Address: Not Supported 00:17:04.078 SGL Offset: Supported 00:17:04.078 Transport SGL Data Block: Not Supported 00:17:04.078 Replay Protected Memory Block: Not Supported 00:17:04.078 00:17:04.078 Firmware Slot Information 00:17:04.078 ========================= 00:17:04.078 Active slot: 1 00:17:04.078 Slot 1 Firmware Revision: 24.09 00:17:04.078 00:17:04.078 00:17:04.078 Commands Supported and Effects 00:17:04.078 ============================== 00:17:04.078 Admin Commands 00:17:04.078 -------------- 00:17:04.078 Get Log Page (02h): Supported 00:17:04.078 Identify (06h): Supported 00:17:04.078 Abort (08h): Supported 00:17:04.078 Set Features (09h): Supported 00:17:04.078 Get Features (0Ah): Supported 00:17:04.078 Asynchronous Event Request (0Ch): Supported 00:17:04.078 Keep Alive (18h): Supported 00:17:04.078 I/O Commands 00:17:04.078 ------------ 00:17:04.078 Flush (00h): Supported LBA-Change 00:17:04.078 Write (01h): Supported LBA-Change 00:17:04.078 Read (02h): Supported 00:17:04.078 Compare (05h): Supported 00:17:04.078 Write Zeroes (08h): Supported LBA-Change 00:17:04.078 Dataset Management (09h): Supported LBA-Change 00:17:04.078 Copy (19h): Supported LBA-Change 00:17:04.078 00:17:04.078 Error Log 00:17:04.078 ========= 00:17:04.078 00:17:04.078 Arbitration 00:17:04.078 =========== 00:17:04.078 Arbitration Burst: 1 00:17:04.078 00:17:04.078 Power 
Management 00:17:04.078 ================ 00:17:04.078 Number of Power States: 1 00:17:04.078 Current Power State: Power State #0 00:17:04.078 Power State #0: 00:17:04.078 Max Power: 0.00 W 00:17:04.078 Non-Operational State: Operational 00:17:04.078 Entry Latency: Not Reported 00:17:04.078 Exit Latency: Not Reported 00:17:04.078 Relative Read Throughput: 0 00:17:04.078 Relative Read Latency: 0 00:17:04.078 Relative Write Throughput: 0 00:17:04.078 Relative Write Latency: 0 00:17:04.078 Idle Power: Not Reported 00:17:04.078 Active Power: Not Reported 00:17:04.078 Non-Operational Permissive Mode: Not Supported 00:17:04.078 00:17:04.078 Health Information 00:17:04.078 ================== 00:17:04.078 Critical Warnings: 00:17:04.078 Available Spare Space: OK 00:17:04.078 Temperature: OK 00:17:04.078 Device Reliability: OK 00:17:04.078 Read Only: No 00:17:04.078 Volatile Memory Backup: OK 00:17:04.078 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:04.078 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:04.078 Available Spare: 0% 00:17:04.078 Available Spare Threshold: 0% 00:17:04.078 Life Percentage Used:[2024-07-13 06:03:55.197666] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.078 [2024-07-13 06:03:55.197674] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x5f2e60) 00:17:04.078 [2024-07-13 06:03:55.197683] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.078 [2024-07-13 06:03:55.197713] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62c180, cid 7, qid 0 00:17:04.078 [2024-07-13 06:03:55.197782] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.078 [2024-07-13 06:03:55.197789] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.078 [2024-07-13 06:03:55.197793] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.078 [2024-07-13 06:03:55.197798] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62c180) on tqpair=0x5f2e60 00:17:04.078 [2024-07-13 06:03:55.197839] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:17:04.078 [2024-07-13 06:03:55.197851] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62b700) on tqpair=0x5f2e60 00:17:04.078 [2024-07-13 06:03:55.197859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:04.078 [2024-07-13 06:03:55.197865] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62b880) on tqpair=0x5f2e60 00:17:04.078 [2024-07-13 06:03:55.197870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:04.078 [2024-07-13 06:03:55.197876] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62ba00) on tqpair=0x5f2e60 00:17:04.078 [2024-07-13 06:03:55.197881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:04.078 [2024-07-13 06:03:55.197886] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62bb80) on tqpair=0x5f2e60 00:17:04.078 [2024-07-13 06:03:55.197891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:04.078 [2024-07-13 06:03:55.197901] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.078 [2024-07-13 06:03:55.197905] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.078 [2024-07-13 06:03:55.197909] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5f2e60) 00:17:04.078 [2024-07-13 06:03:55.197918] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.078 [2024-07-13 06:03:55.197941] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62bb80, cid 3, qid 0 00:17:04.078 [2024-07-13 06:03:55.198005] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.078 [2024-07-13 06:03:55.198014] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.078 [2024-07-13 06:03:55.198018] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.078 [2024-07-13 06:03:55.198022] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62bb80) on tqpair=0x5f2e60 00:17:04.078 [2024-07-13 06:03:55.198031] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.078 [2024-07-13 06:03:55.198036] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.078 [2024-07-13 06:03:55.198040] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5f2e60) 00:17:04.078 [2024-07-13 06:03:55.198048] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.078 [2024-07-13 06:03:55.198071] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62bb80, cid 3, qid 0 00:17:04.078 [2024-07-13 06:03:55.198143] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.078 [2024-07-13 06:03:55.198150] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.078 [2024-07-13 06:03:55.198154] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.078 [2024-07-13 06:03:55.198159] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62bb80) on tqpair=0x5f2e60 00:17:04.078 [2024-07-13 06:03:55.198164] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:17:04.078 [2024-07-13 06:03:55.198169] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:17:04.078 [2024-07-13 06:03:55.198180] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.078 [2024-07-13 06:03:55.198184] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.078 [2024-07-13 06:03:55.198189] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5f2e60) 00:17:04.078 [2024-07-13 06:03:55.198196] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.078 [2024-07-13 06:03:55.198215] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62bb80, cid 3, qid 0 00:17:04.078 [2024-07-13 06:03:55.198265] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.078 [2024-07-13 06:03:55.198272] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.078 [2024-07-13 06:03:55.198276] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.078 [2024-07-13 06:03:55.198280] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x62bb80) on tqpair=0x5f2e60 00:17:04.078 [2024-07-13 06:03:55.198292] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.078 [2024-07-13 06:03:55.198297] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.078 [2024-07-13 06:03:55.198301] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5f2e60) 00:17:04.078 [2024-07-13 06:03:55.198309] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.078 [2024-07-13 06:03:55.198327] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62bb80, cid 3, qid 0 00:17:04.078 [2024-07-13 06:03:55.198392] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.078 [2024-07-13 06:03:55.198401] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.078 [2024-07-13 06:03:55.198405] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.078 [2024-07-13 06:03:55.198410] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62bb80) on tqpair=0x5f2e60 00:17:04.078 [2024-07-13 06:03:55.198421] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.078 [2024-07-13 06:03:55.198426] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.078 [2024-07-13 06:03:55.198431] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5f2e60) 00:17:04.079 [2024-07-13 06:03:55.198438] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.079 [2024-07-13 06:03:55.198458] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62bb80, cid 3, qid 0 00:17:04.079 [2024-07-13 06:03:55.198503] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.079 [2024-07-13 06:03:55.198510] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.079 [2024-07-13 06:03:55.198514] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.079 [2024-07-13 06:03:55.198518] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62bb80) on tqpair=0x5f2e60 00:17:04.079 [2024-07-13 06:03:55.198529] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.079 [2024-07-13 06:03:55.198534] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.079 [2024-07-13 06:03:55.198538] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5f2e60) 00:17:04.079 [2024-07-13 06:03:55.198546] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.079 [2024-07-13 06:03:55.198564] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62bb80, cid 3, qid 0 00:17:04.079 [2024-07-13 06:03:55.198613] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.079 [2024-07-13 06:03:55.198620] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.079 [2024-07-13 06:03:55.198624] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.079 [2024-07-13 06:03:55.198629] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62bb80) on tqpair=0x5f2e60 00:17:04.079 [2024-07-13 06:03:55.198640] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.079 [2024-07-13 06:03:55.198645] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.079 [2024-07-13 06:03:55.198649] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5f2e60) 00:17:04.079 [2024-07-13 06:03:55.198656] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.079 [2024-07-13 06:03:55.198674] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62bb80, cid 3, qid 0 00:17:04.079 [2024-07-13 06:03:55.198727] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.079 [2024-07-13 06:03:55.198734] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.079 [2024-07-13 06:03:55.198738] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.079 [2024-07-13 06:03:55.198742] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62bb80) on tqpair=0x5f2e60 00:17:04.079 [2024-07-13 06:03:55.198753] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.079 [2024-07-13 06:03:55.198758] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.079 [2024-07-13 06:03:55.198762] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5f2e60) 00:17:04.079 [2024-07-13 06:03:55.198770] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.079 [2024-07-13 06:03:55.198787] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62bb80, cid 3, qid 0 00:17:04.079 [2024-07-13 06:03:55.198831] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.079 [2024-07-13 06:03:55.198839] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.079 [2024-07-13 06:03:55.198842] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.079 [2024-07-13 06:03:55.198847] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62bb80) on tqpair=0x5f2e60 00:17:04.079 [2024-07-13 06:03:55.198858] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.079 [2024-07-13 06:03:55.198863] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.079 [2024-07-13 06:03:55.198867] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5f2e60) 00:17:04.079 [2024-07-13 06:03:55.198874] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.079 [2024-07-13 06:03:55.198892] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62bb80, cid 3, qid 0 00:17:04.079 [2024-07-13 06:03:55.198936] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.079 [2024-07-13 06:03:55.198943] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.079 [2024-07-13 06:03:55.198947] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.079 [2024-07-13 06:03:55.198952] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62bb80) on tqpair=0x5f2e60 00:17:04.079 [2024-07-13 06:03:55.198963] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.079 [2024-07-13 06:03:55.198967] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.079 [2024-07-13 06:03:55.198971] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5f2e60) 00:17:04.079 [2024-07-13 
06:03:55.198979] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.079 [2024-07-13 06:03:55.198997] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62bb80, cid 3, qid 0 00:17:04.079 [2024-07-13 06:03:55.199044] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.079 [2024-07-13 06:03:55.199051] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.079 [2024-07-13 06:03:55.199055] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.079 [2024-07-13 06:03:55.199060] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62bb80) on tqpair=0x5f2e60 00:17:04.079 [2024-07-13 06:03:55.199071] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.079 [2024-07-13 06:03:55.199075] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.079 [2024-07-13 06:03:55.199079] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5f2e60) 00:17:04.079 [2024-07-13 06:03:55.199087] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.079 [2024-07-13 06:03:55.199104] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62bb80, cid 3, qid 0 00:17:04.079 [2024-07-13 06:03:55.199151] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.079 [2024-07-13 06:03:55.199158] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.079 [2024-07-13 06:03:55.199162] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.079 [2024-07-13 06:03:55.199167] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62bb80) on tqpair=0x5f2e60 00:17:04.079 [2024-07-13 06:03:55.199178] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.079 [2024-07-13 06:03:55.199183] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.079 [2024-07-13 06:03:55.199187] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5f2e60) 00:17:04.079 [2024-07-13 06:03:55.199194] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.079 [2024-07-13 06:03:55.199212] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62bb80, cid 3, qid 0 00:17:04.079 [2024-07-13 06:03:55.199259] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.079 [2024-07-13 06:03:55.199267] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.079 [2024-07-13 06:03:55.199271] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.079 [2024-07-13 06:03:55.199279] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62bb80) on tqpair=0x5f2e60 00:17:04.079 [2024-07-13 06:03:55.199295] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.079 [2024-07-13 06:03:55.199304] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.079 [2024-07-13 06:03:55.199308] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5f2e60) 00:17:04.079 [2024-07-13 06:03:55.199316] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.079 [2024-07-13 06:03:55.199336] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62bb80, cid 3, qid 0 00:17:04.079 [2024-07-13 06:03:55.199408] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.079 [2024-07-13 06:03:55.199417] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.079 [2024-07-13 06:03:55.199421] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.079 [2024-07-13 06:03:55.199425] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62bb80) on tqpair=0x5f2e60 00:17:04.079 [2024-07-13 06:03:55.199436] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.079 [2024-07-13 06:03:55.199441] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.079 [2024-07-13 06:03:55.199445] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5f2e60) 00:17:04.079 [2024-07-13 06:03:55.199453] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.079 [2024-07-13 06:03:55.199488] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62bb80, cid 3, qid 0 00:17:04.079 [2024-07-13 06:03:55.199535] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.079 [2024-07-13 06:03:55.199543] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.079 [2024-07-13 06:03:55.199547] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.079 [2024-07-13 06:03:55.199551] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62bb80) on tqpair=0x5f2e60 00:17:04.079 [2024-07-13 06:03:55.199562] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.079 [2024-07-13 06:03:55.199567] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.079 [2024-07-13 06:03:55.199571] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5f2e60) 00:17:04.079 [2024-07-13 06:03:55.199579] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.079 [2024-07-13 06:03:55.199597] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62bb80, cid 3, qid 0 00:17:04.079 [2024-07-13 06:03:55.199642] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.079 [2024-07-13 06:03:55.199649] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.079 [2024-07-13 06:03:55.199653] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.079 [2024-07-13 06:03:55.199658] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62bb80) on tqpair=0x5f2e60 00:17:04.079 [2024-07-13 06:03:55.199668] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.079 [2024-07-13 06:03:55.199673] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.079 [2024-07-13 06:03:55.199677] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5f2e60) 00:17:04.079 [2024-07-13 06:03:55.199685] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.079 [2024-07-13 06:03:55.199703] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62bb80, cid 3, qid 0 00:17:04.079 [2024-07-13 06:03:55.199749] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.079 [2024-07-13 
06:03:55.199757] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.079 [2024-07-13 06:03:55.199761] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.079 [2024-07-13 06:03:55.199765] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62bb80) on tqpair=0x5f2e60 00:17:04.079 [2024-07-13 06:03:55.199776] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.079 [2024-07-13 06:03:55.199781] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.079 [2024-07-13 06:03:55.199785] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5f2e60) 00:17:04.079 [2024-07-13 06:03:55.199792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.079 [2024-07-13 06:03:55.199810] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62bb80, cid 3, qid 0 00:17:04.079 [2024-07-13 06:03:55.199856] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.079 [2024-07-13 06:03:55.199864] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.079 [2024-07-13 06:03:55.199868] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.079 [2024-07-13 06:03:55.199872] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62bb80) on tqpair=0x5f2e60 00:17:04.079 [2024-07-13 06:03:55.199883] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.079 [2024-07-13 06:03:55.199888] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.079 [2024-07-13 06:03:55.199892] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5f2e60) 00:17:04.079 [2024-07-13 06:03:55.199900] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.079 [2024-07-13 06:03:55.199918] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62bb80, cid 3, qid 0 00:17:04.079 [2024-07-13 06:03:55.199961] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.079 [2024-07-13 06:03:55.199968] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.079 [2024-07-13 06:03:55.199972] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.079 [2024-07-13 06:03:55.199976] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62bb80) on tqpair=0x5f2e60 00:17:04.079 [2024-07-13 06:03:55.199987] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.079 [2024-07-13 06:03:55.199992] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.079 [2024-07-13 06:03:55.199996] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5f2e60) 00:17:04.079 [2024-07-13 06:03:55.200004] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.079 [2024-07-13 06:03:55.200021] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62bb80, cid 3, qid 0 00:17:04.079 [2024-07-13 06:03:55.200074] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.079 [2024-07-13 06:03:55.200082] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.079 [2024-07-13 06:03:55.200086] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.079 [2024-07-13 
06:03:55.200090] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62bb80) on tqpair=0x5f2e60 00:17:04.079 [2024-07-13 06:03:55.200101] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.079 [2024-07-13 06:03:55.200106] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.079 [2024-07-13 06:03:55.200110] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5f2e60) 00:17:04.079 [2024-07-13 06:03:55.200118] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.079 [2024-07-13 06:03:55.200135] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62bb80, cid 3, qid 0 00:17:04.079 [2024-07-13 06:03:55.200178] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.079 [2024-07-13 06:03:55.200198] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.079 [2024-07-13 06:03:55.200202] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.079 [2024-07-13 06:03:55.200207] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62bb80) on tqpair=0x5f2e60 00:17:04.079 [2024-07-13 06:03:55.200218] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.079 [2024-07-13 06:03:55.200223] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.079 [2024-07-13 06:03:55.200227] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5f2e60) 00:17:04.079 [2024-07-13 06:03:55.200235] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.079 [2024-07-13 06:03:55.200254] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62bb80, cid 3, qid 0 00:17:04.079 [2024-07-13 06:03:55.200298] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.079 [2024-07-13 06:03:55.200305] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.079 [2024-07-13 06:03:55.200309] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.079 [2024-07-13 06:03:55.200313] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62bb80) on tqpair=0x5f2e60 00:17:04.079 [2024-07-13 06:03:55.200324] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.079 [2024-07-13 06:03:55.200329] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.079 [2024-07-13 06:03:55.200333] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5f2e60) 00:17:04.079 [2024-07-13 06:03:55.200341] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.079 [2024-07-13 06:03:55.200359] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62bb80, cid 3, qid 0 00:17:04.079 [2024-07-13 06:03:55.200421] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.079 [2024-07-13 06:03:55.200430] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.079 [2024-07-13 06:03:55.200434] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.079 [2024-07-13 06:03:55.200439] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62bb80) on tqpair=0x5f2e60 00:17:04.079 [2024-07-13 06:03:55.200450] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:17:04.079 [2024-07-13 06:03:55.200455] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.079 [2024-07-13 06:03:55.200459] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5f2e60) 00:17:04.079 [2024-07-13 06:03:55.200467] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.079 [2024-07-13 06:03:55.200487] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62bb80, cid 3, qid 0 00:17:04.080 [2024-07-13 06:03:55.200537] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.080 [2024-07-13 06:03:55.200545] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.080 [2024-07-13 06:03:55.200549] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.080 [2024-07-13 06:03:55.200553] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62bb80) on tqpair=0x5f2e60 00:17:04.080 [2024-07-13 06:03:55.200564] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.080 [2024-07-13 06:03:55.200569] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.080 [2024-07-13 06:03:55.200573] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5f2e60) 00:17:04.080 [2024-07-13 06:03:55.200581] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.080 [2024-07-13 06:03:55.200599] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62bb80, cid 3, qid 0 00:17:04.080 [2024-07-13 06:03:55.200649] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.080 [2024-07-13 06:03:55.200656] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.080 [2024-07-13 06:03:55.200660] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.080 [2024-07-13 06:03:55.200665] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62bb80) on tqpair=0x5f2e60 00:17:04.080 [2024-07-13 06:03:55.200675] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.080 [2024-07-13 06:03:55.200680] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.080 [2024-07-13 06:03:55.200684] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5f2e60) 00:17:04.080 [2024-07-13 06:03:55.200692] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.080 [2024-07-13 06:03:55.200710] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62bb80, cid 3, qid 0 00:17:04.080 [2024-07-13 06:03:55.200754] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.080 [2024-07-13 06:03:55.200762] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.080 [2024-07-13 06:03:55.200766] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.080 [2024-07-13 06:03:55.200770] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62bb80) on tqpair=0x5f2e60 00:17:04.080 [2024-07-13 06:03:55.200781] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.080 [2024-07-13 06:03:55.200785] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.080 [2024-07-13 06:03:55.200789] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=3 on tqpair(0x5f2e60) 00:17:04.080 [2024-07-13 06:03:55.200797] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.080 [2024-07-13 06:03:55.200815] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62bb80, cid 3, qid 0 00:17:04.080 [2024-07-13 06:03:55.200861] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.080 [2024-07-13 06:03:55.200868] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.080 [2024-07-13 06:03:55.200872] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.080 [2024-07-13 06:03:55.200877] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62bb80) on tqpair=0x5f2e60 00:17:04.080 [2024-07-13 06:03:55.200888] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.080 [2024-07-13 06:03:55.200893] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.080 [2024-07-13 06:03:55.200897] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5f2e60) 00:17:04.080 [2024-07-13 06:03:55.200904] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.080 [2024-07-13 06:03:55.200923] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62bb80, cid 3, qid 0 00:17:04.080 [2024-07-13 06:03:55.200973] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.080 [2024-07-13 06:03:55.200981] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.080 [2024-07-13 06:03:55.200985] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.080 [2024-07-13 06:03:55.200989] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62bb80) on tqpair=0x5f2e60 00:17:04.080 [2024-07-13 06:03:55.201000] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.080 [2024-07-13 06:03:55.201005] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.080 [2024-07-13 06:03:55.201009] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5f2e60) 00:17:04.080 [2024-07-13 06:03:55.201016] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.080 [2024-07-13 06:03:55.201034] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62bb80, cid 3, qid 0 00:17:04.080 [2024-07-13 06:03:55.201083] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.080 [2024-07-13 06:03:55.201091] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.080 [2024-07-13 06:03:55.201095] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.080 [2024-07-13 06:03:55.201099] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62bb80) on tqpair=0x5f2e60 00:17:04.080 [2024-07-13 06:03:55.201110] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.080 [2024-07-13 06:03:55.201115] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.080 [2024-07-13 06:03:55.201119] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5f2e60) 00:17:04.080 [2024-07-13 06:03:55.201126] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.080 
[2024-07-13 06:03:55.201144] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62bb80, cid 3, qid 0 00:17:04.080 [2024-07-13 06:03:55.201190] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.080 [2024-07-13 06:03:55.201198] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.080 [2024-07-13 06:03:55.201202] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.080 [2024-07-13 06:03:55.201206] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62bb80) on tqpair=0x5f2e60 00:17:04.080 [2024-07-13 06:03:55.201217] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.080 [2024-07-13 06:03:55.201222] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.080 [2024-07-13 06:03:55.201226] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5f2e60) 00:17:04.080 [2024-07-13 06:03:55.201234] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.080 [2024-07-13 06:03:55.201251] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62bb80, cid 3, qid 0 00:17:04.080 [2024-07-13 06:03:55.201298] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.080 [2024-07-13 06:03:55.201308] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.080 [2024-07-13 06:03:55.201312] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.080 [2024-07-13 06:03:55.201316] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62bb80) on tqpair=0x5f2e60 00:17:04.080 [2024-07-13 06:03:55.201328] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.080 [2024-07-13 06:03:55.201333] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.080 [2024-07-13 06:03:55.201337] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5f2e60) 00:17:04.080 [2024-07-13 06:03:55.201345] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.080 [2024-07-13 06:03:55.201365] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62bb80, cid 3, qid 0 00:17:04.080 [2024-07-13 06:03:55.205407] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.080 [2024-07-13 06:03:55.205416] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.080 [2024-07-13 06:03:55.205421] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.080 [2024-07-13 06:03:55.205425] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62bb80) on tqpair=0x5f2e60 00:17:04.080 [2024-07-13 06:03:55.205441] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.080 [2024-07-13 06:03:55.205447] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.080 [2024-07-13 06:03:55.205451] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5f2e60) 00:17:04.080 [2024-07-13 06:03:55.205461] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.080 [2024-07-13 06:03:55.205487] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62bb80, cid 3, qid 0 00:17:04.080 [2024-07-13 06:03:55.205542] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 
5 00:17:04.080 [2024-07-13 06:03:55.205549] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.080 [2024-07-13 06:03:55.205553] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.080 [2024-07-13 06:03:55.205558] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62bb80) on tqpair=0x5f2e60 00:17:04.080 [2024-07-13 06:03:55.205566] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:17:04.080 0% 00:17:04.080 Data Units Read: 0 00:17:04.080 Data Units Written: 0 00:17:04.080 Host Read Commands: 0 00:17:04.080 Host Write Commands: 0 00:17:04.080 Controller Busy Time: 0 minutes 00:17:04.080 Power Cycles: 0 00:17:04.080 Power On Hours: 0 hours 00:17:04.080 Unsafe Shutdowns: 0 00:17:04.080 Unrecoverable Media Errors: 0 00:17:04.080 Lifetime Error Log Entries: 0 00:17:04.080 Warning Temperature Time: 0 minutes 00:17:04.080 Critical Temperature Time: 0 minutes 00:17:04.080 00:17:04.080 Number of Queues 00:17:04.080 ================ 00:17:04.080 Number of I/O Submission Queues: 127 00:17:04.080 Number of I/O Completion Queues: 127 00:17:04.080 00:17:04.080 Active Namespaces 00:17:04.080 ================= 00:17:04.080 Namespace ID:1 00:17:04.080 Error Recovery Timeout: Unlimited 00:17:04.080 Command Set Identifier: NVM (00h) 00:17:04.080 Deallocate: Supported 00:17:04.080 Deallocated/Unwritten Error: Not Supported 00:17:04.080 Deallocated Read Value: Unknown 00:17:04.080 Deallocate in Write Zeroes: Not Supported 00:17:04.080 Deallocated Guard Field: 0xFFFF 00:17:04.080 Flush: Supported 00:17:04.080 Reservation: Supported 00:17:04.080 Namespace Sharing Capabilities: Multiple Controllers 00:17:04.080 Size (in LBAs): 131072 (0GiB) 00:17:04.080 Capacity (in LBAs): 131072 (0GiB) 00:17:04.080 Utilization (in LBAs): 131072 (0GiB) 00:17:04.080 NGUID: ABCDEF0123456789ABCDEF0123456789 00:17:04.080 EUI64: ABCDEF0123456789 00:17:04.080 UUID: 3c1a25a7-a2dc-4c60-a622-d0f019bd6ead 00:17:04.080 Thin Provisioning: Not Supported 00:17:04.080 Per-NS Atomic Units: Yes 00:17:04.080 Atomic Boundary Size (Normal): 0 00:17:04.080 Atomic Boundary Size (PFail): 0 00:17:04.080 Atomic Boundary Offset: 0 00:17:04.080 Maximum Single Source Range Length: 65535 00:17:04.080 Maximum Copy Length: 65535 00:17:04.080 Maximum Source Range Count: 1 00:17:04.080 NGUID/EUI64 Never Reused: No 00:17:04.080 Namespace Write Protected: No 00:17:04.080 Number of LBA Formats: 1 00:17:04.080 Current LBA Format: LBA Format #00 00:17:04.080 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:04.080 00:17:04.080 06:03:55 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:17:04.080 06:03:55 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:04.080 06:03:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.080 06:03:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:04.080 06:03:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.080 06:03:55 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:17:04.080 06:03:55 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:17:04.080 06:03:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:04.080 06:03:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:17:04.080 06:03:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 
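The controller and namespace data above is what the identify host test read back from nqn.2016-06.io.spdk:cnode1 over the 10.0.0.2:4420 TCP listener before the subsystem is deleted and the target torn down. For reference only, a rough equivalent from a kernel initiator, assuming nvme-cli is installed and that the controller enumerates as /dev/nvme0 (both assumptions, not part of this run), would be:

    nvme discover -t tcp -a 10.0.0.2 -s 4420                        # list subsystems behind the discovery service
    nvme connect  -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme id-ctrl /dev/nvme0                                         # controller data (queue counts, serial, ...)
    nvme id-ns   /dev/nvme0n1                                       # namespace data (LBA formats, NGUID/EUI64, UUID)
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1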
00:17:04.080 06:03:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:17:04.080 06:03:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:04.080 06:03:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:04.080 rmmod nvme_tcp 00:17:04.080 rmmod nvme_fabrics 00:17:04.080 rmmod nvme_keyring 00:17:04.080 06:03:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:04.080 06:03:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:17:04.080 06:03:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:17:04.080 06:03:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 88202 ']' 00:17:04.080 06:03:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 88202 00:17:04.080 06:03:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 88202 ']' 00:17:04.080 06:03:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 88202 00:17:04.080 06:03:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:17:04.080 06:03:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:04.080 06:03:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88202 00:17:04.080 06:03:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:04.080 06:03:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:04.080 killing process with pid 88202 00:17:04.080 06:03:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88202' 00:17:04.080 06:03:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 88202 00:17:04.080 06:03:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 88202 00:17:04.080 06:03:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:04.080 06:03:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:04.080 06:03:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:04.080 06:03:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:04.080 06:03:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:04.080 06:03:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:04.080 06:03:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:04.080 06:03:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:04.080 06:03:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:04.080 00:17:04.080 real 0m1.660s 00:17:04.080 user 0m3.723s 00:17:04.080 sys 0m0.579s 00:17:04.080 06:03:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:04.080 06:03:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:04.080 ************************************ 00:17:04.080 END TEST nvmf_identify 00:17:04.080 ************************************ 00:17:04.080 06:03:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:04.080 06:03:55 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:04.080 06:03:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:04.080 06:03:55 nvmf_tcp -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:17:04.080 06:03:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:04.080 ************************************ 00:17:04.080 START TEST nvmf_perf 00:17:04.080 ************************************ 00:17:04.080 06:03:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:04.080 * Looking for test storage... 00:17:04.080 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:04.080 06:03:55 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:04.080 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:17:04.080 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:04.080 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:04.080 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:04.080 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:04.080 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:04.080 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:04.080 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:04.080 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:04.080 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:04.080 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:04.080 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:17:04.080 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=d95af516-4532-4483-a837-b3cd72acabce 00:17:04.080 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:04.080 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:04.080 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:04.080 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:04.080 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:04.081 06:03:55 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:04.081 06:03:55 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:04.081 06:03:55 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:04.081 06:03:55 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.081 06:03:55 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.081 06:03:55 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.081 06:03:55 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:17:04.081 06:03:55 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.081 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:17:04.081 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:04.081 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:04.081 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:04.081 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:04.081 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:04.081 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:04.081 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:04.081 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:04.081 06:03:55 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:04.081 06:03:55 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:04.081 06:03:55 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:04.081 06:03:55 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:17:04.081 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:04.081 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:04.081 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:04.081 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:04.081 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:04.081 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:04.081 06:03:55 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:04.081 06:03:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:04.081 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:04.081 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:04.081 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:04.081 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:04.081 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:04.081 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:04.081 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:04.081 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:04.081 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:04.081 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:04.081 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:04.081 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:04.081 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:04.081 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:04.081 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:04.081 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:04.081 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:04.081 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:04.081 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:04.081 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:04.081 Cannot find device "nvmf_tgt_br" 00:17:04.081 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # true 00:17:04.081 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:04.081 Cannot find device "nvmf_tgt_br2" 00:17:04.081 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # true 00:17:04.081 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:04.081 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:04.081 Cannot find device "nvmf_tgt_br" 00:17:04.081 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # true 00:17:04.081 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:04.081 Cannot find device "nvmf_tgt_br2" 00:17:04.081 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # true 00:17:04.081 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:04.338 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:04.338 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:04.338 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:04.338 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # true 00:17:04.338 06:03:55 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:04.338 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:04.338 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # true 00:17:04.338 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:04.338 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:04.338 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:04.338 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:04.338 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:04.338 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:04.338 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:04.338 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:04.338 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:04.338 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:04.338 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:04.338 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:04.338 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:04.338 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:04.338 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:04.338 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:04.338 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:04.338 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:04.338 06:03:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:04.338 06:03:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:04.338 06:03:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:04.338 06:03:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:04.338 06:03:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:04.338 06:03:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:04.338 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:04.338 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:17:04.338 00:17:04.338 --- 10.0.0.2 ping statistics --- 00:17:04.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:04.338 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:17:04.338 06:03:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:04.338 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:04.338 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:17:04.338 00:17:04.338 --- 10.0.0.3 ping statistics --- 00:17:04.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:04.338 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:17:04.338 06:03:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:04.338 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:04.338 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:17:04.338 00:17:04.338 --- 10.0.0.1 ping statistics --- 00:17:04.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:04.338 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:17:04.338 06:03:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:04.338 06:03:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:17:04.338 06:03:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:04.338 06:03:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:04.338 06:03:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:04.338 06:03:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:04.338 06:03:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:04.338 06:03:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:04.338 06:03:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:04.597 06:03:56 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:17:04.597 06:03:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:04.597 06:03:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:04.597 06:03:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:04.597 06:03:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=88398 00:17:04.597 06:03:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 88398 00:17:04.597 06:03:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:04.597 06:03:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 88398 ']' 00:17:04.598 06:03:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:04.598 06:03:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:04.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:04.598 06:03:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:04.598 06:03:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:04.598 06:03:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:04.598 [2024-07-13 06:03:56.152307] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
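The ping checks above close out nvmf_veth_init: one veth pair per side, the target ends moved into the nvmf_tgt_ns_spdk namespace, everything joined to a bridge, and TCP/4420 opened, after which nvmf_tgt is started inside that namespace. Condensed from the trace (the second target interface and the individual link-up steps are omitted), the setup for this run amounts to:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator side, 10.0.0.1/24
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br        # target side, 10.0.0.2/24
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF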
00:17:04.598 [2024-07-13 06:03:56.152426] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:04.598 [2024-07-13 06:03:56.295552] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:04.857 [2024-07-13 06:03:56.341823] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:04.857 [2024-07-13 06:03:56.341887] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:04.857 [2024-07-13 06:03:56.341901] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:04.857 [2024-07-13 06:03:56.341911] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:04.857 [2024-07-13 06:03:56.341920] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:04.857 [2024-07-13 06:03:56.342086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:04.857 [2024-07-13 06:03:56.342229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:04.857 [2024-07-13 06:03:56.342361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:04.857 [2024-07-13 06:03:56.342362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.857 [2024-07-13 06:03:56.375661] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:04.857 06:03:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:04.857 06:03:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:17:04.857 06:03:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:04.857 06:03:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:04.857 06:03:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:04.857 06:03:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:04.857 06:03:56 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:17:04.857 06:03:56 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:05.423 06:03:56 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:17:05.423 06:03:56 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:17:05.423 06:03:57 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:17:05.423 06:03:57 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:05.682 06:03:57 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:17:05.682 06:03:57 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:17:05.682 06:03:57 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:17:05.682 06:03:57 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:17:05.682 06:03:57 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:05.940 [2024-07-13 06:03:57.633116] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
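With the TCP transport initialized, perf.sh exports both bdevs (the 64 MB Malloc0 created with bdev_malloc_create 64 512 and the local PCIe NVMe device at 0000:00:10.0 attached as Nvme0n1) and then drives them with spdk_nvme_perf. The trace that follows shows the full commands; condensed, with rpc.py standing for /home/vagrant/spdk_repo/spdk/scripts/rpc.py and spdk_nvme_perf living under build/bin/:

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'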
00:17:05.940 06:03:57 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:06.529 06:03:57 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:06.529 06:03:57 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:06.529 06:03:58 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:06.529 06:03:58 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:17:06.802 06:03:58 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:07.060 [2024-07-13 06:03:58.598285] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:07.060 06:03:58 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:07.317 06:03:58 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:17:07.317 06:03:58 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:17:07.317 06:03:58 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:17:07.317 06:03:58 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:17:08.690 Initializing NVMe Controllers 00:17:08.690 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:17:08.690 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:17:08.690 Initialization complete. Launching workers. 00:17:08.690 ======================================================== 00:17:08.690 Latency(us) 00:17:08.690 Device Information : IOPS MiB/s Average min max 00:17:08.690 PCIE (0000:00:10.0) NSID 1 from core 0: 23936.98 93.50 1336.79 257.76 8075.44 00:17:08.690 ======================================================== 00:17:08.691 Total : 23936.98 93.50 1336.79 257.76 8075.44 00:17:08.691 00:17:08.691 06:04:00 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:09.624 Initializing NVMe Controllers 00:17:09.624 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:09.624 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:09.624 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:09.624 Initialization complete. Launching workers. 
00:17:09.624 ======================================================== 00:17:09.624 Latency(us) 00:17:09.624 Device Information : IOPS MiB/s Average min max 00:17:09.624 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3622.33 14.15 274.65 106.46 4729.07 00:17:09.624 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 125.49 0.49 8030.74 4545.14 12012.50 00:17:09.624 ======================================================== 00:17:09.624 Total : 3747.82 14.64 534.35 106.46 12012.50 00:17:09.624 00:17:09.881 06:04:01 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:11.257 Initializing NVMe Controllers 00:17:11.257 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:11.257 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:11.257 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:11.257 Initialization complete. Launching workers. 00:17:11.257 ======================================================== 00:17:11.257 Latency(us) 00:17:11.257 Device Information : IOPS MiB/s Average min max 00:17:11.257 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8874.29 34.67 3605.62 543.26 10069.75 00:17:11.257 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3969.21 15.50 8074.19 5926.72 15670.08 00:17:11.257 ======================================================== 00:17:11.257 Total : 12843.50 50.17 4986.61 543.26 15670.08 00:17:11.257 00:17:11.257 06:04:02 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:17:11.257 06:04:02 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:13.786 Initializing NVMe Controllers 00:17:13.786 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:13.786 Controller IO queue size 128, less than required. 00:17:13.786 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:13.786 Controller IO queue size 128, less than required. 00:17:13.786 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:13.786 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:13.786 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:13.786 Initialization complete. Launching workers. 
00:17:13.786 ======================================================== 00:17:13.786 Latency(us) 00:17:13.786 Device Information : IOPS MiB/s Average min max 00:17:13.786 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1823.50 455.88 71738.79 44620.53 115108.00 00:17:13.786 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 671.50 167.88 195090.01 55934.08 339745.54 00:17:13.786 ======================================================== 00:17:13.786 Total : 2495.00 623.75 104937.33 44620.53 339745.54 00:17:13.786 00:17:13.786 06:04:05 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:17:13.786 Initializing NVMe Controllers 00:17:13.786 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:13.786 Controller IO queue size 128, less than required. 00:17:13.786 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:13.786 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:17:13.786 Controller IO queue size 128, less than required. 00:17:13.786 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:13.786 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:17:13.786 WARNING: Some requested NVMe devices were skipped 00:17:13.786 No valid NVMe controllers or AIO or URING devices found 00:17:13.786 06:04:05 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:17:16.318 Initializing NVMe Controllers 00:17:16.318 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:16.318 Controller IO queue size 128, less than required. 00:17:16.318 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:16.318 Controller IO queue size 128, less than required. 00:17:16.318 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:16.318 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:16.318 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:16.318 Initialization complete. Launching workers. 
00:17:16.318 00:17:16.318 ==================== 00:17:16.318 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:17:16.318 TCP transport: 00:17:16.318 polls: 9257 00:17:16.318 idle_polls: 4709 00:17:16.318 sock_completions: 4548 00:17:16.318 nvme_completions: 7091 00:17:16.318 submitted_requests: 10688 00:17:16.318 queued_requests: 1 00:17:16.318 00:17:16.318 ==================== 00:17:16.318 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:17:16.318 TCP transport: 00:17:16.318 polls: 10071 00:17:16.318 idle_polls: 5394 00:17:16.318 sock_completions: 4677 00:17:16.318 nvme_completions: 6893 00:17:16.318 submitted_requests: 10342 00:17:16.318 queued_requests: 1 00:17:16.318 ======================================================== 00:17:16.318 Latency(us) 00:17:16.318 Device Information : IOPS MiB/s Average min max 00:17:16.318 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1768.61 442.15 74101.11 42301.46 102801.50 00:17:16.318 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1719.22 429.80 75221.89 31984.92 119151.69 00:17:16.318 ======================================================== 00:17:16.318 Total : 3487.83 871.96 74653.57 31984.92 119151.69 00:17:16.318 00:17:16.318 06:04:07 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:17:16.318 06:04:08 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:16.883 06:04:08 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:17:16.883 06:04:08 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:00:10.0 ']' 00:17:16.883 06:04:08 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:17:17.142 06:04:08 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=cbf40ee4-5606-45a3-9cbb-4ddfcd3d7dd0 00:17:17.142 06:04:08 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb cbf40ee4-5606-45a3-9cbb-4ddfcd3d7dd0 00:17:17.142 06:04:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=cbf40ee4-5606-45a3-9cbb-4ddfcd3d7dd0 00:17:17.142 06:04:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:17:17.142 06:04:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:17:17.142 06:04:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:17:17.142 06:04:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:17.400 06:04:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:17:17.400 { 00:17:17.400 "uuid": "cbf40ee4-5606-45a3-9cbb-4ddfcd3d7dd0", 00:17:17.400 "name": "lvs_0", 00:17:17.400 "base_bdev": "Nvme0n1", 00:17:17.400 "total_data_clusters": 1278, 00:17:17.400 "free_clusters": 1278, 00:17:17.400 "block_size": 4096, 00:17:17.400 "cluster_size": 4194304 00:17:17.400 } 00:17:17.400 ]' 00:17:17.400 06:04:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="cbf40ee4-5606-45a3-9cbb-4ddfcd3d7dd0") .free_clusters' 00:17:17.400 06:04:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=1278 00:17:17.401 06:04:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="cbf40ee4-5606-45a3-9cbb-4ddfcd3d7dd0") .cluster_size' 00:17:17.401 5112 00:17:17.401 06:04:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # 
cs=4194304 00:17:17.401 06:04:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=5112 00:17:17.401 06:04:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 5112 00:17:17.401 06:04:09 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:17:17.401 06:04:09 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u cbf40ee4-5606-45a3-9cbb-4ddfcd3d7dd0 lbd_0 5112 00:17:17.659 06:04:09 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=c2f85c29-0e02-4e81-8972-caff96a5ec96 00:17:17.659 06:04:09 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore c2f85c29-0e02-4e81-8972-caff96a5ec96 lvs_n_0 00:17:18.226 06:04:09 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=69255b0b-0327-4756-807c-9f68314b20a1 00:17:18.226 06:04:09 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 69255b0b-0327-4756-807c-9f68314b20a1 00:17:18.226 06:04:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=69255b0b-0327-4756-807c-9f68314b20a1 00:17:18.226 06:04:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:17:18.226 06:04:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:17:18.226 06:04:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:17:18.226 06:04:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:18.484 06:04:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:17:18.484 { 00:17:18.484 "uuid": "cbf40ee4-5606-45a3-9cbb-4ddfcd3d7dd0", 00:17:18.484 "name": "lvs_0", 00:17:18.484 "base_bdev": "Nvme0n1", 00:17:18.484 "total_data_clusters": 1278, 00:17:18.484 "free_clusters": 0, 00:17:18.484 "block_size": 4096, 00:17:18.484 "cluster_size": 4194304 00:17:18.484 }, 00:17:18.484 { 00:17:18.484 "uuid": "69255b0b-0327-4756-807c-9f68314b20a1", 00:17:18.484 "name": "lvs_n_0", 00:17:18.484 "base_bdev": "c2f85c29-0e02-4e81-8972-caff96a5ec96", 00:17:18.484 "total_data_clusters": 1276, 00:17:18.484 "free_clusters": 1276, 00:17:18.484 "block_size": 4096, 00:17:18.484 "cluster_size": 4194304 00:17:18.484 } 00:17:18.484 ]' 00:17:18.484 06:04:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="69255b0b-0327-4756-807c-9f68314b20a1") .free_clusters' 00:17:18.484 06:04:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=1276 00:17:18.484 06:04:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="69255b0b-0327-4756-807c-9f68314b20a1") .cluster_size' 00:17:18.484 5104 00:17:18.484 06:04:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:17:18.484 06:04:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=5104 00:17:18.484 06:04:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 5104 00:17:18.484 06:04:10 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:17:18.484 06:04:10 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 69255b0b-0327-4756-807c-9f68314b20a1 lbd_nest_0 5104 00:17:18.743 06:04:10 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=a9d5e50d-364f-4f20-a93f-34d365216770 00:17:18.743 06:04:10 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 
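The get_lvs_free_mb helper traced above derives the size passed to bdev_lvol_create from the store's free cluster count: free_mb = free_clusters * cluster_size / 1 MiB, i.e. 1278 * 4194304 / 1048576 = 5112 for lvs_0 and 1276 * 4194304 / 1048576 = 5104 for the nested lvs_n_0. A condensed sketch of the same query, reusing only the rpc.py and jq calls shown in the trace (the UUID is the lvs_0 store created above):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
uuid=cbf40ee4-5606-45a3-9cbb-4ddfcd3d7dd0          # lvs_0, created above
# free_clusters and cluster_size come straight from bdev_lvol_get_lvstores
fc=$($rpc bdev_lvol_get_lvstores | jq ".[] | select(.uuid==\"$uuid\") .free_clusters")
cs=$($rpc bdev_lvol_get_lvstores | jq ".[] | select(.uuid==\"$uuid\") .cluster_size")
echo $(( fc * cs / 1048576 ))                      # 1278 * 4194304 / 1048576 = 5112 MiB
# perf.sh then passes this value to: rpc.py bdev_lvol_create -u $uuid lbd_0 5112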
00:17:19.001 06:04:10 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:17:19.001 06:04:10 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 a9d5e50d-364f-4f20-a93f-34d365216770 00:17:19.259 06:04:10 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:19.519 06:04:11 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:17:19.519 06:04:11 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:17:19.519 06:04:11 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:17:19.519 06:04:11 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:17:19.519 06:04:11 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:19.778 Initializing NVMe Controllers 00:17:19.778 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:19.778 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:17:19.778 WARNING: Some requested NVMe devices were skipped 00:17:19.778 No valid NVMe controllers or AIO or URING devices found 00:17:19.778 06:04:11 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:17:19.778 06:04:11 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:31.986 Initializing NVMe Controllers 00:17:31.986 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:31.986 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:31.986 Initialization complete. Launching workers. 
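The qd_depth and io_size arrays set a few lines above drive the next six runs: every queue depth in {1, 32, 128} is paired with every I/O size in {512, 131072} against the same TCP listener, and the 512-byte cases are skipped because the lvol namespace uses 4096-byte blocks. A minimal sketch of that sweep, using only the binary path and connection string from the trace:

perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
addr='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
for qd in 1 32 128; do
  for io in 512 131072; do
    # 512-byte runs are rejected: the namespace block size is 4096, so perf skips the ns
    "$perf" -q "$qd" -o "$io" -w randrw -M 50 -t 10 -r "$addr"
  done
done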
00:17:31.986 ======================================================== 00:17:31.986 Latency(us) 00:17:31.986 Device Information : IOPS MiB/s Average min max 00:17:31.986 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1020.50 127.56 979.16 332.11 12281.38 00:17:31.986 ======================================================== 00:17:31.986 Total : 1020.50 127.56 979.16 332.11 12281.38 00:17:31.986 00:17:31.986 06:04:21 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:17:31.986 06:04:21 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:17:31.986 06:04:21 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:31.986 Initializing NVMe Controllers 00:17:31.986 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:31.986 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:17:31.986 WARNING: Some requested NVMe devices were skipped 00:17:31.986 No valid NVMe controllers or AIO or URING devices found 00:17:31.986 06:04:22 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:17:31.986 06:04:22 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:41.957 Initializing NVMe Controllers 00:17:41.957 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:41.957 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:41.957 Initialization complete. Launching workers. 
00:17:41.957 ======================================================== 00:17:41.957 Latency(us) 00:17:41.957 Device Information : IOPS MiB/s Average min max 00:17:41.957 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1338.15 167.27 23950.95 7661.07 59246.13 00:17:41.957 ======================================================== 00:17:41.957 Total : 1338.15 167.27 23950.95 7661.07 59246.13 00:17:41.957 00:17:41.957 06:04:32 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:17:41.957 06:04:32 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:17:41.957 06:04:32 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:41.957 Initializing NVMe Controllers 00:17:41.957 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:41.957 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:17:41.957 WARNING: Some requested NVMe devices were skipped 00:17:41.957 No valid NVMe controllers or AIO or URING devices found 00:17:41.957 06:04:32 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:17:41.957 06:04:32 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:51.936 Initializing NVMe Controllers 00:17:51.936 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:51.936 Controller IO queue size 128, less than required. 00:17:51.936 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:51.936 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:51.937 Initialization complete. Launching workers. 
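The queue-depth-32 table above is internally consistent with Little's law (a cross-check, not something the harness computes): sustained IOPS is roughly the queue depth divided by the mean latency.

# 32 outstanding I/Os / 23950.95 us average latency ~= 1336 IOPS vs. the 1338.15 reported
awk 'BEGIN { printf "%.0f\n", 32 / 23950.95e-6 }'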
00:17:51.937 ======================================================== 00:17:51.937 Latency(us) 00:17:51.937 Device Information : IOPS MiB/s Average min max 00:17:51.937 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4064.00 508.00 31556.39 11846.79 70639.01 00:17:51.937 ======================================================== 00:17:51.937 Total : 4064.00 508.00 31556.39 11846.79 70639.01 00:17:51.937 00:17:51.937 06:04:42 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:51.937 06:04:43 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete a9d5e50d-364f-4f20-a93f-34d365216770 00:17:51.937 06:04:43 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:17:52.195 06:04:43 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete c2f85c29-0e02-4e81-8972-caff96a5ec96 00:17:52.454 06:04:44 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:17:52.712 06:04:44 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:17:52.712 06:04:44 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:17:52.712 06:04:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:52.712 06:04:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:17:52.712 06:04:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:52.712 06:04:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:17:52.712 06:04:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:52.712 06:04:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:52.712 rmmod nvme_tcp 00:17:52.712 rmmod nvme_fabrics 00:17:52.712 rmmod nvme_keyring 00:17:52.712 06:04:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:52.712 06:04:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:17:52.712 06:04:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:17:52.712 06:04:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 88398 ']' 00:17:52.712 06:04:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 88398 00:17:52.712 06:04:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 88398 ']' 00:17:52.712 06:04:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 88398 00:17:52.712 06:04:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:17:52.712 06:04:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:52.712 06:04:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88398 00:17:52.712 killing process with pid 88398 00:17:52.712 06:04:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:52.712 06:04:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:52.712 06:04:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88398' 00:17:52.712 06:04:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 88398 00:17:52.712 06:04:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 88398 00:17:54.090 06:04:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:54.090 06:04:45 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:54.090 06:04:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:54.090 06:04:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:54.090 06:04:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:54.090 06:04:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:54.090 06:04:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:54.090 06:04:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:54.090 06:04:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:54.090 ************************************ 00:17:54.090 END TEST nvmf_perf 00:17:54.090 ************************************ 00:17:54.090 00:17:54.090 real 0m50.033s 00:17:54.090 user 3m7.928s 00:17:54.090 sys 0m12.640s 00:17:54.090 06:04:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:54.090 06:04:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:54.090 06:04:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:54.090 06:04:45 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:17:54.090 06:04:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:54.090 06:04:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:54.090 06:04:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:54.090 ************************************ 00:17:54.090 START TEST nvmf_fio_host 00:17:54.090 ************************************ 00:17:54.090 06:04:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:17:54.090 * Looking for test storage... 
00:17:54.090 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:54.090 06:04:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:54.090 06:04:45 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:54.090 06:04:45 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:54.090 06:04:45 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:54.091 06:04:45 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.091 06:04:45 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.091 06:04:45 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.091 06:04:45 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:17:54.091 06:04:45 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.091 06:04:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:54.091 06:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:17:54.091 06:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:54.091 06:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:54.091 06:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:54.091 06:04:45 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:54.091 06:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:54.091 06:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:54.091 06:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:54.091 06:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:54.091 06:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:54.091 06:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:54.091 06:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:17:54.091 06:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=d95af516-4532-4483-a837-b3cd72acabce 00:17:54.091 06:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:54.091 06:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:54.091 06:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:54.091 06:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:54.091 06:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:54.349 06:04:45 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:54.349 06:04:45 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:54.349 06:04:45 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:54.349 06:04:45 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.349 06:04:45 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.349 06:04:45 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.349 06:04:45 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:17:54.349 06:04:45 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.349 06:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:17:54.349 06:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:54.349 06:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:54.349 06:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:54.349 06:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:54.349 06:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:54.349 06:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:54.349 06:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:54.349 06:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:54.349 06:04:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:54.349 06:04:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:17:54.349 06:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:54.349 06:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:54.349 06:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:54.349 06:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:54.349 06:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:54.349 06:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:54.349 06:04:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:54.349 06:04:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:54.349 06:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:54.349 06:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:54.349 06:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:54.349 06:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 
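nvmftestinit now falls through to nvmf_veth_init, whose trace follows. Condensed into a sketch (it omits the second target interface, the link-up steps, and the connectivity pings that the real function also performs), the topology it builds is one veth pair per endpoint, bridged together, with the target side moved into its own network namespace:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator side
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT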
00:17:54.349 06:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:54.349 06:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:54.349 06:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:54.349 06:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:54.349 06:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:54.349 06:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:54.349 06:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:54.349 06:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:54.349 06:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:54.349 06:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:54.349 06:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:54.349 06:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:54.349 06:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:54.349 06:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:54.349 06:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:54.349 06:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:54.349 Cannot find device "nvmf_tgt_br" 00:17:54.349 06:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:17:54.349 06:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:54.349 Cannot find device "nvmf_tgt_br2" 00:17:54.349 06:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:17:54.349 06:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:54.349 06:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:54.349 Cannot find device "nvmf_tgt_br" 00:17:54.349 06:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:17:54.349 06:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:54.349 Cannot find device "nvmf_tgt_br2" 00:17:54.349 06:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:17:54.349 06:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:54.349 06:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:54.349 06:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:54.349 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:54.349 06:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:17:54.349 06:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:54.349 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:54.349 06:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:17:54.349 06:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:54.349 06:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link 
add nvmf_init_if type veth peer name nvmf_init_br 00:17:54.349 06:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:54.349 06:04:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:54.349 06:04:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:54.349 06:04:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:54.349 06:04:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:54.349 06:04:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:54.349 06:04:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:54.349 06:04:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:54.349 06:04:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:54.349 06:04:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:54.349 06:04:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:54.349 06:04:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:54.349 06:04:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:54.607 06:04:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:54.607 06:04:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:54.607 06:04:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:54.607 06:04:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:54.607 06:04:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:54.607 06:04:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:54.607 06:04:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:54.607 06:04:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:54.607 06:04:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:54.607 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:54.607 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:17:54.607 00:17:54.607 --- 10.0.0.2 ping statistics --- 00:17:54.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:54.607 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:17:54.607 06:04:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:54.607 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:54.607 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:17:54.607 00:17:54.607 --- 10.0.0.3 ping statistics --- 00:17:54.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:54.607 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:17:54.607 06:04:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:54.607 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:54.607 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:17:54.607 00:17:54.607 --- 10.0.0.1 ping statistics --- 00:17:54.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:54.607 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:17:54.607 06:04:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:54.607 06:04:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:17:54.607 06:04:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:54.607 06:04:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:54.607 06:04:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:54.607 06:04:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:54.607 06:04:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:54.608 06:04:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:54.608 06:04:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:54.608 06:04:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:17:54.608 06:04:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:17:54.608 06:04:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:54.608 06:04:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.608 06:04:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=89204 00:17:54.608 06:04:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:54.608 06:04:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:54.608 06:04:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 89204 00:17:54.608 06:04:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 89204 ']' 00:17:54.608 06:04:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:54.608 06:04:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:54.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:54.608 06:04:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:54.608 06:04:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:54.608 06:04:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.608 [2024-07-13 06:04:46.246519] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:17:54.608 [2024-07-13 06:04:46.246628] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:54.867 [2024-07-13 06:04:46.391422] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:54.867 [2024-07-13 06:04:46.439975] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:54.867 [2024-07-13 06:04:46.440337] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:54.867 [2024-07-13 06:04:46.440527] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:54.867 [2024-07-13 06:04:46.440682] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:54.867 [2024-07-13 06:04:46.440724] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:54.867 [2024-07-13 06:04:46.440991] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:54.867 [2024-07-13 06:04:46.441124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:54.867 [2024-07-13 06:04:46.441197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.867 [2024-07-13 06:04:46.441196] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:54.867 [2024-07-13 06:04:46.476198] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:54.867 06:04:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:54.867 06:04:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:17:54.867 06:04:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:55.125 [2024-07-13 06:04:46.742837] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:55.125 06:04:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:17:55.125 06:04:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:55.125 06:04:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.125 06:04:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:55.383 Malloc1 00:17:55.383 06:04:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:55.947 06:04:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:55.947 06:04:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:56.204 [2024-07-13 06:04:47.780886] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:56.204 06:04:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:56.461 06:04:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:17:56.461 06:04:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:17:56.461 06:04:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:17:56.461 06:04:48 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:17:56.461 06:04:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:56.461 06:04:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:17:56.461 06:04:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:56.461 06:04:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:17:56.461 06:04:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:17:56.461 06:04:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:17:56.461 06:04:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:56.461 06:04:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:17:56.461 06:04:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:17:56.461 06:04:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:17:56.461 06:04:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:17:56.461 06:04:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:17:56.461 06:04:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:17:56.461 06:04:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:56.461 06:04:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:17:56.461 06:04:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:17:56.461 06:04:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:17:56.461 06:04:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:17:56.461 06:04:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:17:56.719 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:17:56.719 fio-3.35 00:17:56.719 Starting 1 thread 00:17:59.245 00:17:59.245 test: (groupid=0, jobs=1): err= 0: pid=89274: Sat Jul 13 06:04:50 2024 00:17:59.245 read: IOPS=8921, BW=34.9MiB/s (36.5MB/s)(70.0MiB/2008msec) 00:17:59.245 slat (nsec): min=1829, max=240180, avg=2530.94, stdev=2699.57 00:17:59.245 clat (usec): min=1778, max=14817, avg=7462.31, stdev=702.14 00:17:59.245 lat (usec): min=1821, max=14820, avg=7464.84, stdev=702.01 00:17:59.245 clat percentiles (usec): 00:17:59.245 | 1.00th=[ 6194], 5.00th=[ 6587], 10.00th=[ 6718], 20.00th=[ 6980], 00:17:59.245 | 30.00th=[ 7111], 40.00th=[ 7242], 50.00th=[ 7373], 60.00th=[ 7504], 00:17:59.245 | 70.00th=[ 7701], 80.00th=[ 7898], 90.00th=[ 8225], 95.00th=[ 8586], 00:17:59.246 | 99.00th=[ 9896], 99.50th=[10421], 99.90th=[12649], 99.95th=[13960], 00:17:59.246 | 99.99th=[14746] 00:17:59.246 bw ( KiB/s): min=33816, max=37048, per=100.00%, avg=35702.00, stdev=1367.64, samples=4 00:17:59.246 iops : min= 8454, max= 9262, avg=8925.50, stdev=341.91, samples=4 00:17:59.246 write: IOPS=8936, BW=34.9MiB/s (36.6MB/s)(70.1MiB/2008msec); 0 zone resets 00:17:59.246 
slat (nsec): min=1874, max=151558, avg=2616.43, stdev=2020.69 00:17:59.246 clat (usec): min=1672, max=14790, avg=6815.07, stdev=648.13 00:17:59.246 lat (usec): min=1682, max=14792, avg=6817.69, stdev=648.12 00:17:59.246 clat percentiles (usec): 00:17:59.246 | 1.00th=[ 5604], 5.00th=[ 5997], 10.00th=[ 6194], 20.00th=[ 6390], 00:17:59.246 | 30.00th=[ 6521], 40.00th=[ 6652], 50.00th=[ 6783], 60.00th=[ 6915], 00:17:59.246 | 70.00th=[ 7046], 80.00th=[ 7177], 90.00th=[ 7504], 95.00th=[ 7767], 00:17:59.246 | 99.00th=[ 8979], 99.50th=[ 9503], 99.90th=[12518], 99.95th=[13960], 00:17:59.246 | 99.99th=[14746] 00:17:59.246 bw ( KiB/s): min=34112, max=37064, per=100.00%, avg=35764.00, stdev=1320.37, samples=4 00:17:59.246 iops : min= 8528, max= 9266, avg=8941.00, stdev=330.09, samples=4 00:17:59.246 lat (msec) : 2=0.03%, 4=0.13%, 10=99.24%, 20=0.60% 00:17:59.246 cpu : usr=69.21%, sys=22.97%, ctx=4, majf=0, minf=7 00:17:59.246 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:17:59.246 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:59.246 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:59.246 issued rwts: total=17915,17945,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:59.246 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:59.246 00:17:59.246 Run status group 0 (all jobs): 00:17:59.246 READ: bw=34.9MiB/s (36.5MB/s), 34.9MiB/s-34.9MiB/s (36.5MB/s-36.5MB/s), io=70.0MiB (73.4MB), run=2008-2008msec 00:17:59.246 WRITE: bw=34.9MiB/s (36.6MB/s), 34.9MiB/s-34.9MiB/s (36.6MB/s-36.6MB/s), io=70.1MiB (73.5MB), run=2008-2008msec 00:17:59.246 06:04:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:17:59.246 06:04:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:17:59.246 06:04:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:17:59.246 06:04:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:59.246 06:04:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:17:59.246 06:04:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:59.246 06:04:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:17:59.246 06:04:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:17:59.246 06:04:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:17:59.246 06:04:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:17:59.246 06:04:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:59.246 06:04:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:17:59.246 06:04:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:17:59.246 06:04:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:17:59.246 06:04:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 
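Both the run above and the mock_sgl_config.fio run being set up in this trace use the same invocation pattern: fio is pointed at SPDK's external NVMe ioengine via LD_PRELOAD and given the NVMe-oF connection parameters as the filename. A hedged sketch of the equivalent direct invocation (the job files live in app/fio/nvme; their contents are not shown in this log):

LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
  /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
  '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096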
00:17:59.246 06:04:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:59.246 06:04:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:17:59.246 06:04:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:17:59.246 06:04:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:17:59.246 06:04:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:17:59.246 06:04:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:17:59.246 06:04:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:17:59.246 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:17:59.246 fio-3.35 00:17:59.246 Starting 1 thread 00:18:01.777 00:18:01.777 test: (groupid=0, jobs=1): err= 0: pid=89323: Sat Jul 13 06:04:53 2024 00:18:01.777 read: IOPS=7862, BW=123MiB/s (129MB/s)(247MiB/2010msec) 00:18:01.777 slat (usec): min=3, max=147, avg= 4.24, stdev= 2.70 00:18:01.777 clat (usec): min=2464, max=18243, avg=8938.17, stdev=2719.57 00:18:01.777 lat (usec): min=2467, max=18246, avg=8942.41, stdev=2719.63 00:18:01.777 clat percentiles (usec): 00:18:01.777 | 1.00th=[ 4490], 5.00th=[ 5276], 10.00th=[ 5800], 20.00th=[ 6521], 00:18:01.777 | 30.00th=[ 7177], 40.00th=[ 7832], 50.00th=[ 8455], 60.00th=[ 9241], 00:18:01.777 | 70.00th=[10159], 80.00th=[11207], 90.00th=[12256], 95.00th=[14222], 00:18:01.777 | 99.00th=[16581], 99.50th=[17433], 99.90th=[17957], 99.95th=[17957], 00:18:01.777 | 99.99th=[18220] 00:18:01.777 bw ( KiB/s): min=59040, max=71217, per=50.75%, avg=63844.25, stdev=5880.71, samples=4 00:18:01.777 iops : min= 3690, max= 4451, avg=3990.25, stdev=367.52, samples=4 00:18:01.777 write: IOPS=4590, BW=71.7MiB/s (75.2MB/s)(131MiB/1822msec); 0 zone resets 00:18:01.777 slat (usec): min=34, max=355, avg=42.06, stdev= 9.38 00:18:01.777 clat (usec): min=4213, max=22217, avg=12961.08, stdev=2092.75 00:18:01.777 lat (usec): min=4251, max=22255, avg=13003.13, stdev=2093.23 00:18:01.777 clat percentiles (usec): 00:18:01.777 | 1.00th=[ 8717], 5.00th=[10028], 10.00th=[10552], 20.00th=[11207], 00:18:01.777 | 30.00th=[11731], 40.00th=[12256], 50.00th=[12780], 60.00th=[13304], 00:18:01.777 | 70.00th=[13960], 80.00th=[14746], 90.00th=[15664], 95.00th=[16581], 00:18:01.777 | 99.00th=[18744], 99.50th=[19792], 99.90th=[21627], 99.95th=[21627], 00:18:01.777 | 99.99th=[22152] 00:18:01.777 bw ( KiB/s): min=61728, max=73421, per=90.52%, avg=66475.25, stdev=5575.81, samples=4 00:18:01.777 iops : min= 3858, max= 4588, avg=4154.50, stdev=348.15, samples=4 00:18:01.777 lat (msec) : 4=0.28%, 10=46.21%, 20=53.37%, 50=0.14% 00:18:01.777 cpu : usr=80.69%, sys=14.29%, ctx=5, majf=0, minf=3 00:18:01.777 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:18:01.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:01.777 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:01.777 issued rwts: total=15803,8363,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:01.777 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:01.777 00:18:01.777 Run status group 0 (all jobs): 00:18:01.777 READ: 
bw=123MiB/s (129MB/s), 123MiB/s-123MiB/s (129MB/s-129MB/s), io=247MiB (259MB), run=2010-2010msec 00:18:01.777 WRITE: bw=71.7MiB/s (75.2MB/s), 71.7MiB/s-71.7MiB/s (75.2MB/s-75.2MB/s), io=131MiB (137MB), run=1822-1822msec 00:18:01.777 06:04:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:01.777 06:04:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:18:01.777 06:04:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:18:01.777 06:04:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:18:01.777 06:04:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=() 00:18:01.777 06:04:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # local bdfs 00:18:01.777 06:04:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:18:01.777 06:04:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:18:01.777 06:04:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:18:01.777 06:04:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:18:01.777 06:04:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:18:01.777 06:04:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 -i 10.0.0.2 00:18:02.035 Nvme0n1 00:18:02.035 06:04:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:18:02.293 06:04:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=3ceba30c-141b-452b-81ad-f3d940afe63d 00:18:02.293 06:04:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 3ceba30c-141b-452b-81ad-f3d940afe63d 00:18:02.293 06:04:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=3ceba30c-141b-452b-81ad-f3d940afe63d 00:18:02.293 06:04:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:18:02.294 06:04:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:18:02.294 06:04:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:18:02.294 06:04:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:02.552 06:04:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:18:02.552 { 00:18:02.552 "uuid": "3ceba30c-141b-452b-81ad-f3d940afe63d", 00:18:02.552 "name": "lvs_0", 00:18:02.552 "base_bdev": "Nvme0n1", 00:18:02.552 "total_data_clusters": 4, 00:18:02.552 "free_clusters": 4, 00:18:02.552 "block_size": 4096, 00:18:02.552 "cluster_size": 1073741824 00:18:02.552 } 00:18:02.552 ]' 00:18:02.552 06:04:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="3ceba30c-141b-452b-81ad-f3d940afe63d") .free_clusters' 00:18:02.552 06:04:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=4 00:18:02.552 06:04:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="3ceba30c-141b-452b-81ad-f3d940afe63d") .cluster_size' 00:18:02.552 4096 00:18:02.552 06:04:54 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1370 -- # cs=1073741824 00:18:02.552 06:04:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=4096 00:18:02.552 06:04:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 4096 00:18:02.552 06:04:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:18:02.811 fa1d7598-a59c-4f4b-a9b3-1813c6477f41 00:18:02.811 06:04:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:18:03.102 06:04:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:18:03.423 06:04:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:03.683 06:04:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:18:03.683 06:04:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:18:03.683 06:04:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:18:03.683 06:04:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:03.683 06:04:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:18:03.683 06:04:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:03.683 06:04:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:18:03.683 06:04:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:18:03.683 06:04:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:03.683 06:04:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:03.683 06:04:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:18:03.683 06:04:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:03.683 06:04:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:03.683 06:04:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:03.683 06:04:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:03.683 06:04:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:03.683 06:04:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:18:03.683 06:04:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:03.683 06:04:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:03.683 06:04:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:03.683 06:04:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # 
LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:18:03.683 06:04:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:18:03.683 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:18:03.683 fio-3.35 00:18:03.683 Starting 1 thread 00:18:06.215 00:18:06.215 test: (groupid=0, jobs=1): err= 0: pid=89426: Sat Jul 13 06:04:57 2024 00:18:06.215 read: IOPS=6341, BW=24.8MiB/s (26.0MB/s)(49.7MiB/2008msec) 00:18:06.215 slat (usec): min=2, max=323, avg= 2.58, stdev= 3.57 00:18:06.215 clat (usec): min=2829, max=18688, avg=10553.78, stdev=871.39 00:18:06.215 lat (usec): min=2838, max=18691, avg=10556.35, stdev=871.05 00:18:06.215 clat percentiles (usec): 00:18:06.215 | 1.00th=[ 8717], 5.00th=[ 9241], 10.00th=[ 9634], 20.00th=[ 9896], 00:18:06.215 | 30.00th=[10159], 40.00th=[10290], 50.00th=[10552], 60.00th=[10683], 00:18:06.215 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11600], 95.00th=[11863], 00:18:06.215 | 99.00th=[12387], 99.50th=[12780], 99.90th=[16909], 99.95th=[17171], 00:18:06.215 | 99.99th=[18220] 00:18:06.215 bw ( KiB/s): min=24360, max=25864, per=99.81%, avg=25316.00, stdev=666.95, samples=4 00:18:06.215 iops : min= 6090, max= 6466, avg=6329.00, stdev=166.74, samples=4 00:18:06.215 write: IOPS=6339, BW=24.8MiB/s (26.0MB/s)(49.7MiB/2008msec); 0 zone resets 00:18:06.215 slat (usec): min=2, max=241, avg= 2.71, stdev= 2.35 00:18:06.215 clat (usec): min=2327, max=18050, avg=9568.78, stdev=840.47 00:18:06.215 lat (usec): min=2341, max=18052, avg=9571.49, stdev=840.32 00:18:06.215 clat percentiles (usec): 00:18:06.215 | 1.00th=[ 7832], 5.00th=[ 8356], 10.00th=[ 8586], 20.00th=[ 8979], 00:18:06.215 | 30.00th=[ 9110], 40.00th=[ 9372], 50.00th=[ 9503], 60.00th=[ 9765], 00:18:06.215 | 70.00th=[10028], 80.00th=[10159], 90.00th=[10552], 95.00th=[10814], 00:18:06.215 | 99.00th=[11338], 99.50th=[11731], 99.90th=[15795], 99.95th=[17695], 00:18:06.215 | 99.99th=[17957] 00:18:06.215 bw ( KiB/s): min=25152, max=25416, per=99.78%, avg=25304.50, stdev=111.43, samples=4 00:18:06.215 iops : min= 6288, max= 6354, avg=6326.00, stdev=27.86, samples=4 00:18:06.215 lat (msec) : 4=0.09%, 10=48.01%, 20=51.89% 00:18:06.215 cpu : usr=72.20%, sys=21.62%, ctx=5, majf=0, minf=7 00:18:06.215 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:06.215 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:06.215 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:06.215 issued rwts: total=12733,12730,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:06.215 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:06.215 00:18:06.215 Run status group 0 (all jobs): 00:18:06.215 READ: bw=24.8MiB/s (26.0MB/s), 24.8MiB/s-24.8MiB/s (26.0MB/s-26.0MB/s), io=49.7MiB (52.2MB), run=2008-2008msec 00:18:06.215 WRITE: bw=24.8MiB/s (26.0MB/s), 24.8MiB/s-24.8MiB/s (26.0MB/s-26.0MB/s), io=49.7MiB (52.1MB), run=2008-2008msec 00:18:06.215 06:04:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:18:06.473 06:04:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:18:06.732 06:04:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # 
ls_nested_guid=ee2dbf5c-cd86-4913-83da-470e28ff4965 00:18:06.732 06:04:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb ee2dbf5c-cd86-4913-83da-470e28ff4965 00:18:06.732 06:04:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=ee2dbf5c-cd86-4913-83da-470e28ff4965 00:18:06.732 06:04:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:18:06.732 06:04:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:18:06.732 06:04:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:18:06.732 06:04:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:06.991 06:04:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:18:06.991 { 00:18:06.991 "uuid": "3ceba30c-141b-452b-81ad-f3d940afe63d", 00:18:06.991 "name": "lvs_0", 00:18:06.991 "base_bdev": "Nvme0n1", 00:18:06.991 "total_data_clusters": 4, 00:18:06.991 "free_clusters": 0, 00:18:06.991 "block_size": 4096, 00:18:06.991 "cluster_size": 1073741824 00:18:06.991 }, 00:18:06.991 { 00:18:06.991 "uuid": "ee2dbf5c-cd86-4913-83da-470e28ff4965", 00:18:06.991 "name": "lvs_n_0", 00:18:06.991 "base_bdev": "fa1d7598-a59c-4f4b-a9b3-1813c6477f41", 00:18:06.991 "total_data_clusters": 1022, 00:18:06.991 "free_clusters": 1022, 00:18:06.991 "block_size": 4096, 00:18:06.991 "cluster_size": 4194304 00:18:06.991 } 00:18:06.991 ]' 00:18:06.991 06:04:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="ee2dbf5c-cd86-4913-83da-470e28ff4965") .free_clusters' 00:18:06.991 06:04:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=1022 00:18:06.991 06:04:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="ee2dbf5c-cd86-4913-83da-470e28ff4965") .cluster_size' 00:18:06.991 06:04:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:18:06.991 4088 00:18:06.991 06:04:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=4088 00:18:06.991 06:04:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 4088 00:18:06.991 06:04:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:18:07.249 afc3af03-f573-43bf-8567-99126de960b8 00:18:07.250 06:04:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:18:07.508 06:04:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:18:07.766 06:04:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:18:08.025 06:04:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:18:08.025 06:04:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:18:08.025 06:04:59 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:18:08.025 06:04:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:08.025 06:04:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:18:08.025 06:04:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:08.025 06:04:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:18:08.025 06:04:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:18:08.025 06:04:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:08.025 06:04:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:08.025 06:04:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:08.025 06:04:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:18:08.025 06:04:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:08.025 06:04:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:08.025 06:04:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:08.025 06:04:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:08.025 06:04:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:08.025 06:04:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:18:08.025 06:04:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:08.025 06:04:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:08.025 06:04:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:18:08.025 06:04:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:18:08.297 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:18:08.297 fio-3.35 00:18:08.297 Starting 1 thread 00:18:10.832 00:18:10.832 test: (groupid=0, jobs=1): err= 0: pid=89510: Sat Jul 13 06:05:02 2024 00:18:10.832 read: IOPS=5641, BW=22.0MiB/s (23.1MB/s)(44.3MiB/2010msec) 00:18:10.832 slat (nsec): min=1946, max=277195, avg=2682.54, stdev=3527.73 00:18:10.832 clat (usec): min=3212, max=20456, avg=11916.36, stdev=996.78 00:18:10.832 lat (usec): min=3242, max=20458, avg=11919.04, stdev=996.47 00:18:10.832 clat percentiles (usec): 00:18:10.832 | 1.00th=[ 9896], 5.00th=[10552], 10.00th=[10814], 20.00th=[11207], 00:18:10.832 | 30.00th=[11469], 40.00th=[11731], 50.00th=[11863], 60.00th=[12125], 00:18:10.832 | 70.00th=[12387], 80.00th=[12649], 90.00th=[13042], 95.00th=[13435], 00:18:10.832 | 99.00th=[14091], 99.50th=[14484], 99.90th=[18744], 99.95th=[20317], 00:18:10.832 | 99.99th=[20317] 00:18:10.832 bw ( KiB/s): min=21744, max=23120, per=99.97%, avg=22558.00, stdev=580.74, samples=4 00:18:10.833 iops : min= 5436, max= 5780, avg=5639.50, stdev=145.18, samples=4 00:18:10.833 write: IOPS=5610, BW=21.9MiB/s (23.0MB/s)(44.1MiB/2010msec); 0 zone resets 00:18:10.833 
slat (usec): min=2, max=205, avg= 2.81, stdev= 2.57 00:18:10.833 clat (usec): min=2135, max=18993, avg=10762.64, stdev=959.69 00:18:10.833 lat (usec): min=2147, max=18996, avg=10765.45, stdev=959.53 00:18:10.833 clat percentiles (usec): 00:18:10.833 | 1.00th=[ 8717], 5.00th=[ 9372], 10.00th=[ 9765], 20.00th=[10028], 00:18:10.833 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10814], 60.00th=[10945], 00:18:10.833 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11863], 95.00th=[12125], 00:18:10.833 | 99.00th=[12911], 99.50th=[13304], 99.90th=[18220], 99.95th=[18744], 00:18:10.833 | 99.99th=[19006] 00:18:10.833 bw ( KiB/s): min=22072, max=22688, per=99.90%, avg=22422.00, stdev=276.62, samples=4 00:18:10.833 iops : min= 5518, max= 5672, avg=5605.50, stdev=69.15, samples=4 00:18:10.833 lat (msec) : 4=0.07%, 10=9.60%, 20=90.30%, 50=0.03% 00:18:10.833 cpu : usr=73.62%, sys=20.86%, ctx=3, majf=0, minf=7 00:18:10.833 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:18:10.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:10.833 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:10.833 issued rwts: total=11339,11278,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:10.833 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:10.833 00:18:10.833 Run status group 0 (all jobs): 00:18:10.833 READ: bw=22.0MiB/s (23.1MB/s), 22.0MiB/s-22.0MiB/s (23.1MB/s-23.1MB/s), io=44.3MiB (46.4MB), run=2010-2010msec 00:18:10.833 WRITE: bw=21.9MiB/s (23.0MB/s), 21.9MiB/s-21.9MiB/s (23.0MB/s-23.0MB/s), io=44.1MiB (46.2MB), run=2010-2010msec 00:18:10.833 06:05:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:18:10.833 06:05:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:18:10.833 06:05:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:18:11.091 06:05:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:18:11.349 06:05:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:18:11.607 06:05:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:18:11.865 06:05:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:18:12.801 06:05:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:18:12.801 06:05:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:18:12.801 06:05:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:18:12.801 06:05:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:12.801 06:05:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:18:12.801 06:05:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:12.801 06:05:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:18:12.801 06:05:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:12.801 06:05:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:12.801 rmmod nvme_tcp 00:18:12.801 rmmod nvme_fabrics 00:18:12.801 rmmod nvme_keyring 00:18:12.801 06:05:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r 
nvme-fabrics 00:18:12.801 06:05:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:18:12.801 06:05:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:18:12.801 06:05:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 89204 ']' 00:18:12.801 06:05:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 89204 00:18:12.801 06:05:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 89204 ']' 00:18:12.801 06:05:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 89204 00:18:12.801 06:05:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:18:12.801 06:05:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:12.801 06:05:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89204 00:18:12.801 killing process with pid 89204 00:18:12.801 06:05:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:12.801 06:05:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:12.801 06:05:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89204' 00:18:12.801 06:05:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 89204 00:18:12.801 06:05:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 89204 00:18:13.061 06:05:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:13.061 06:05:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:13.061 06:05:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:13.062 06:05:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:13.062 06:05:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:13.062 06:05:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:13.062 06:05:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:13.062 06:05:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:13.062 06:05:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:13.062 00:18:13.062 real 0m18.860s 00:18:13.062 user 1m23.016s 00:18:13.062 sys 0m4.358s 00:18:13.062 06:05:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:13.062 06:05:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.062 ************************************ 00:18:13.062 END TEST nvmf_fio_host 00:18:13.062 ************************************ 00:18:13.062 06:05:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:13.062 06:05:04 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:18:13.062 06:05:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:13.062 06:05:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:13.062 06:05:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:13.062 ************************************ 00:18:13.062 START TEST nvmf_failover 00:18:13.062 ************************************ 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:18:13.062 * Looking for test 
storage... 00:18:13.062 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=d95af516-4532-4483-a837-b3cd72acabce 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:13.062 
06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:13.062 Cannot find device "nvmf_tgt_br" 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # true 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:13.062 Cannot find device "nvmf_tgt_br2" 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # true 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:13.062 Cannot find device "nvmf_tgt_br" 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # true 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:13.062 Cannot find device "nvmf_tgt_br2" 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # true 00:18:13.062 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:13.321 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 
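[editor's note] The "Cannot find device" / "Cannot open network namespace" messages in this stretch are just the cleanup half of nvmf_veth_init failing harmlessly on a fresh VM; the ip commands traced next rebuild the test topology from scratch. A condensed sketch of what those commands amount to (same interface names and addresses as in the log; a few link-up steps omitted):
# Target runs in its own network namespace, reachable from the host over veth pairs
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # first target interface
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # second target interface
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # target, first portal
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # target, second portal
ip link add nvmf_br type bridge && ip link set nvmf_br up
for br_port in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$br_port" master nvmf_br && ip link set "$br_port" up    # host ends joined by one bridge
done
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT         # allow NVMe/TCP traffic
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT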
00:18:13.321 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:13.321 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:13.321 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # true 00:18:13.321 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:13.321 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:13.321 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # true 00:18:13.321 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:13.321 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:13.321 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:13.321 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:13.321 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:13.321 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:13.321 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:13.321 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:13.321 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:13.321 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:13.321 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:13.321 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:13.321 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:13.321 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:13.321 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:13.321 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:13.321 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:13.321 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:13.321 06:05:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:13.321 06:05:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:13.321 06:05:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:13.321 06:05:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:13.321 06:05:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:13.321 06:05:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:13.321 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:13.322 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:18:13.322 00:18:13.322 --- 10.0.0.2 ping statistics --- 00:18:13.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:13.322 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:18:13.322 06:05:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:13.322 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:13.322 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:18:13.322 00:18:13.322 --- 10.0.0.3 ping statistics --- 00:18:13.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:13.322 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:18:13.322 06:05:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:13.580 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:13.580 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.062 ms 00:18:13.580 00:18:13.580 --- 10.0.0.1 ping statistics --- 00:18:13.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:13.580 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:18:13.580 06:05:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:13.580 06:05:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:18:13.580 06:05:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:13.580 06:05:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:13.580 06:05:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:13.581 06:05:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:13.581 06:05:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:13.581 06:05:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:13.581 06:05:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:13.581 06:05:05 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:18:13.581 06:05:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:13.581 06:05:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:13.581 06:05:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:13.581 06:05:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=89743 00:18:13.581 06:05:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:18:13.581 06:05:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 89743 00:18:13.581 06:05:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 89743 ']' 00:18:13.581 06:05:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:13.581 06:05:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:13.581 06:05:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:13.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
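[editor's note] waitforlisten above simply blocks until the freshly started nvmf_tgt answers on its JSON-RPC socket. The real helper in autotest_common.sh does more (timeouts, log capture); a minimal approximation of the launch-and-wait step, using the same binary path and arguments as the trace, could look like:
# Start the target inside the namespace: shm id 0 (-i), all trace groups (-e 0xFFFF), cores 1-3 (-m 0xE)
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
# Poll the RPC socket until the target responds (rpc_get_methods is a cheap query)
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt died during startup" >&2; exit 1; }
    sleep 0.5
done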
00:18:13.581 06:05:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:13.581 06:05:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:13.581 [2024-07-13 06:05:05.125617] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:18:13.581 [2024-07-13 06:05:05.125719] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:13.581 [2024-07-13 06:05:05.263549] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:13.581 [2024-07-13 06:05:05.300681] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:13.581 [2024-07-13 06:05:05.300966] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:13.581 [2024-07-13 06:05:05.301125] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:13.581 [2024-07-13 06:05:05.301242] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:13.581 [2024-07-13 06:05:05.301280] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:13.581 [2024-07-13 06:05:05.301509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:13.581 [2024-07-13 06:05:05.301647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:13.581 [2024-07-13 06:05:05.301652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:13.840 [2024-07-13 06:05:05.331868] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:13.840 06:05:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:13.840 06:05:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:18:13.840 06:05:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:13.840 06:05:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:13.840 06:05:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:13.840 06:05:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:13.840 06:05:05 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:14.100 [2024-07-13 06:05:05.675343] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:14.100 06:05:05 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:14.359 Malloc0 00:18:14.359 06:05:05 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:14.618 06:05:06 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:14.876 06:05:06 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:15.134 [2024-07-13 06:05:06.729699] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:18:15.134 06:05:06 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:15.392 [2024-07-13 06:05:06.974010] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:15.392 06:05:06 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:18:15.650 [2024-07-13 06:05:07.230244] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:18:15.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:15.650 06:05:07 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=89793 00:18:15.650 06:05:07 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:15.650 06:05:07 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 89793 /var/tmp/bdevperf.sock 00:18:15.650 06:05:07 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:18:15.650 06:05:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 89793 ']' 00:18:15.650 06:05:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:15.650 06:05:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:15.650 06:05:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
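[editor's note] With bdevperf waiting on /var/tmp/bdevperf.sock, the failover exercise recorded in the trace below boils down to: register the same controller name over two portals, kick off the 15-second verify workload, then flip listeners on the target while I/O is in flight so the initiator has to fail over. A condensed sketch of that sequence (same NQN, addresses and ports as in the log; the sleeps between steps are trimmed):
bperf_rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock "$@"; }
tgt_rpc()   { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }
# Two portals for one controller name = primary path plus failover path
bperf_rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
bperf_rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
# While the workload runs, keep moving the subsystem between ports 4420/4421/4422
tgt_rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
bperf_rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
tgt_rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
tgt_rpc nvmf_subsystem_add_listener    nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
tgt_rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
wait   # bdevperf finishes the verify run; a failed path would surface here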
00:18:15.650 06:05:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:15.650 06:05:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:16.592 06:05:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:16.592 06:05:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:18:16.592 06:05:08 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:16.862 NVMe0n1 00:18:16.862 06:05:08 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:17.121 00:18:17.121 06:05:08 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=89817 00:18:17.121 06:05:08 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:17.121 06:05:08 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:18:18.495 06:05:09 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:18.495 [2024-07-13 06:05:10.103785] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c28d0 is same with the state(5) to be set 00:18:18.495 [2024-07-13 06:05:10.103845] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c28d0 is same with the state(5) to be set 00:18:18.495 [2024-07-13 06:05:10.103874] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c28d0 is same with the state(5) to be set 00:18:18.495 [2024-07-13 06:05:10.103883] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c28d0 is same with the state(5) to be set 00:18:18.495 [2024-07-13 06:05:10.103892] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c28d0 is same with the state(5) to be set 00:18:18.495 [2024-07-13 06:05:10.103900] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c28d0 is same with the state(5) to be set 00:18:18.495 [2024-07-13 06:05:10.103909] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c28d0 is same with the state(5) to be set 00:18:18.495 [2024-07-13 06:05:10.103917] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c28d0 is same with the state(5) to be set 00:18:18.495 [2024-07-13 06:05:10.103925] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c28d0 is same with the state(5) to be set 00:18:18.495 [2024-07-13 06:05:10.103933] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c28d0 is same with the state(5) to be set 00:18:18.495 [2024-07-13 06:05:10.103942] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c28d0 is same with the state(5) to be set 00:18:18.495 [2024-07-13 06:05:10.103950] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c28d0 is same with the state(5) to be set 00:18:18.495 [2024-07-13 06:05:10.103958] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c28d0 is same 
with the state(5) to be set 00:18:18.495 [the identical tcp.c:1607:nvmf_tcp_qpair_set_recv_state *ERROR* line repeats verbatim for every trace entry from 06:05:10.103981 through 06:05:10.104855 while the port 4420 listener is being removed; duplicates elided] 00:18:18.496 [2024-07-13 
06:05:10.104863] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c28d0 is same with the state(5) to be set 00:18:18.496 [2024-07-13 06:05:10.104871] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c28d0 is same with the state(5) to be set 00:18:18.496 [2024-07-13 06:05:10.104880] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c28d0 is same with the state(5) to be set 00:18:18.496 [2024-07-13 06:05:10.104888] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c28d0 is same with the state(5) to be set 00:18:18.496 [2024-07-13 06:05:10.104896] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c28d0 is same with the state(5) to be set 00:18:18.496 [2024-07-13 06:05:10.104904] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c28d0 is same with the state(5) to be set 00:18:18.496 [2024-07-13 06:05:10.104912] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c28d0 is same with the state(5) to be set 00:18:18.496 [2024-07-13 06:05:10.104920] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c28d0 is same with the state(5) to be set 00:18:18.496 [2024-07-13 06:05:10.104929] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c28d0 is same with the state(5) to be set 00:18:18.496 [2024-07-13 06:05:10.104937] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c28d0 is same with the state(5) to be set 00:18:18.496 06:05:10 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:18:21.775 06:05:13 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:21.775 00:18:21.775 06:05:13 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:22.033 06:05:13 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:18:25.316 06:05:16 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:25.316 [2024-07-13 06:05:16.972931] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:25.316 06:05:17 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:18:26.693 06:05:18 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:18:26.693 06:05:18 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 89817 00:18:33.269 0 00:18:33.269 06:05:23 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 89793 00:18:33.269 06:05:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 89793 ']' 00:18:33.269 06:05:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 89793 00:18:33.269 06:05:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:18:33.269 06:05:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:33.269 06:05:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89793 
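For reference, the listener churn that failover.sh drives above is plain rpc.py usage. A minimal sketch of the same sequence, reusing the controller name, subsystem NQN, addresses, ports, and bdevperf RPC socket that appear in this log (the surrounding sleeps in the script only give bdevperf time to react to each change), would be:

  # add another path (port 4422) to the existing NVMe0 controller inside bdevperf
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1
  # remove the port 4421 listener from the target subsystem
  # (per the notices further down, I/O had just failed over to that path)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener \
      nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  # re-add the original port 4420 listener
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
      nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # finally drop the 4422 listener again before the run winds down
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener \
      nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422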
00:18:33.269 killing process with pid 89793 00:18:33.269 06:05:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:33.269 06:05:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:33.269 06:05:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89793' 00:18:33.269 06:05:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 89793 00:18:33.269 06:05:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 89793 00:18:33.269 06:05:24 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:33.269 [2024-07-13 06:05:07.294806] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:18:33.269 [2024-07-13 06:05:07.294907] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89793 ] 00:18:33.269 [2024-07-13 06:05:07.433164] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.269 [2024-07-13 06:05:07.475768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:33.269 [2024-07-13 06:05:07.510681] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:33.269 Running I/O for 15 seconds... 00:18:33.269 [2024-07-13 06:05:10.103984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:33.269 [2024-07-13 06:05:10.104036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.269 [2024-07-13 06:05:10.104069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:33.269 [2024-07-13 06:05:10.104098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.269 [2024-07-13 06:05:10.104128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:33.269 [2024-07-13 06:05:10.104141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.269 [2024-07-13 06:05:10.104155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:33.269 [2024-07-13 06:05:10.104184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.269 [2024-07-13 06:05:10.104214] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x757f60 is same with the state(5) to be set 00:18:33.269 [2024-07-13 06:05:10.104993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:58656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.269 [2024-07-13 06:05:10.105022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.269 [2024-07-13 06:05:10.105046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:58664 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.269 [2024-07-13 06:05:10.105076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.269 [2024-07-13 06:05:10.105092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:58672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.269 [2024-07-13 06:05:10.105123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.270 [2024-07-13 06:05:10.105139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:58680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.270 [2024-07-13 06:05:10.105153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.270 [2024-07-13 06:05:10.105169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:58688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.270 [2024-07-13 06:05:10.105183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.270 [2024-07-13 06:05:10.105199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:58696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.270 [2024-07-13 06:05:10.105213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.270 [2024-07-13 06:05:10.105257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:58704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.270 [2024-07-13 06:05:10.105273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.270 [2024-07-13 06:05:10.105289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:58712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.270 [2024-07-13 06:05:10.105302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.270 [2024-07-13 06:05:10.105319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:58720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.270 [2024-07-13 06:05:10.105333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.270 [2024-07-13 06:05:10.105349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:58728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.270 [2024-07-13 06:05:10.105363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.270 [2024-07-13 06:05:10.105378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:58736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.270 [2024-07-13 06:05:10.105392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.270 [2024-07-13 06:05:10.105408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:58744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:33.270 [2024-07-13 06:05:10.105475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.270 [2024-07-13 06:05:10.105496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:58752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.270 [2024-07-13 06:05:10.105510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.270 [2024-07-13 06:05:10.105542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:58760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.270 [2024-07-13 06:05:10.105571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.270 [2024-07-13 06:05:10.105588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:58768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.270 [2024-07-13 06:05:10.105602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.270 [2024-07-13 06:05:10.105617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:58776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.270 [2024-07-13 06:05:10.105631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.270 [2024-07-13 06:05:10.105647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:58784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.270 [2024-07-13 06:05:10.105661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.270 [2024-07-13 06:05:10.105677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:58792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.270 [2024-07-13 06:05:10.105690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.270 [2024-07-13 06:05:10.105706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:58800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.270 [2024-07-13 06:05:10.105729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.270 [2024-07-13 06:05:10.105746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:58808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.270 [2024-07-13 06:05:10.105761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.270 [2024-07-13 06:05:10.105776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:58816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.270 [2024-07-13 06:05:10.105790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.270 [2024-07-13 06:05:10.105806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:58824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.270 [2024-07-13 
06:05:10.105820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.270 [2024-07-13 06:05:10.105836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:58832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.270 [2024-07-13 06:05:10.105850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.270 [2024-07-13 06:05:10.105865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:58840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.270 [2024-07-13 06:05:10.105879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.270 [2024-07-13 06:05:10.105894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:58848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.270 [2024-07-13 06:05:10.105908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.270 [2024-07-13 06:05:10.105924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:58856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.270 [2024-07-13 06:05:10.105938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.270 [2024-07-13 06:05:10.105954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:58864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.270 [2024-07-13 06:05:10.105967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.270 [2024-07-13 06:05:10.105983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:58872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.270 [2024-07-13 06:05:10.105997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.270 [2024-07-13 06:05:10.106014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:58880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.270 [2024-07-13 06:05:10.106038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.270 [2024-07-13 06:05:10.106056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:58888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.270 [2024-07-13 06:05:10.106070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.270 [2024-07-13 06:05:10.106086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:58896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.270 [2024-07-13 06:05:10.106100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.270 [2024-07-13 06:05:10.106115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:58904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.270 [2024-07-13 06:05:10.106137] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.270 [2024-07-13 06:05:10.106154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:58912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.270 [2024-07-13 06:05:10.106168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.270 [2024-07-13 06:05:10.106184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:58920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.270 [2024-07-13 06:05:10.106198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.270 [2024-07-13 06:05:10.106214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:58928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.270 [2024-07-13 06:05:10.106228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.270 [2024-07-13 06:05:10.106244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:58936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.270 [2024-07-13 06:05:10.106257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.270 [2024-07-13 06:05:10.106273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:58944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.270 [2024-07-13 06:05:10.106287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.270 [2024-07-13 06:05:10.106303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:58952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.270 [2024-07-13 06:05:10.106316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.270 [2024-07-13 06:05:10.106332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:58960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.270 [2024-07-13 06:05:10.106346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.270 [2024-07-13 06:05:10.106361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:58968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.270 [2024-07-13 06:05:10.106386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.270 [2024-07-13 06:05:10.106418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:58976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.270 [2024-07-13 06:05:10.106448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.270 [2024-07-13 06:05:10.106464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:58984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.270 [2024-07-13 06:05:10.106478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.270 [2024-07-13 06:05:10.106495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:58992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.270 [2024-07-13 06:05:10.106508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.270 [2024-07-13 06:05:10.106524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:59000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.270 [2024-07-13 06:05:10.106538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.270 [2024-07-13 06:05:10.106562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:59008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.270 [2024-07-13 06:05:10.106577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.270 [2024-07-13 06:05:10.106593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:59016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.271 [2024-07-13 06:05:10.106607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.271 [2024-07-13 06:05:10.106623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:59024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.271 [2024-07-13 06:05:10.106637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.271 [2024-07-13 06:05:10.106653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:59032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.271 [2024-07-13 06:05:10.106667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.271 [2024-07-13 06:05:10.106682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:59040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.271 [2024-07-13 06:05:10.106696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.271 [2024-07-13 06:05:10.106712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:59048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.271 [2024-07-13 06:05:10.106725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.271 [2024-07-13 06:05:10.106741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:59056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.271 [2024-07-13 06:05:10.106755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.271 [2024-07-13 06:05:10.106771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:59064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.271 [2024-07-13 06:05:10.106785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.271 [2024-07-13 06:05:10.106801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:59072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.271 [2024-07-13 06:05:10.106815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.271 [2024-07-13 06:05:10.106831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:59080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.271 [2024-07-13 06:05:10.106845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.271 [2024-07-13 06:05:10.106861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:59088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.271 [2024-07-13 06:05:10.106875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.271 [2024-07-13 06:05:10.106890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:59096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.271 [2024-07-13 06:05:10.106904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.271 [2024-07-13 06:05:10.106921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:59104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.271 [2024-07-13 06:05:10.106941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.271 [2024-07-13 06:05:10.106958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:59112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.271 [2024-07-13 06:05:10.106972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.271 [2024-07-13 06:05:10.107003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:59120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.271 [2024-07-13 06:05:10.107033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.271 [2024-07-13 06:05:10.107050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:59128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.271 [2024-07-13 06:05:10.107064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.271 [2024-07-13 06:05:10.107095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:59136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.271 [2024-07-13 06:05:10.107123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.271 [2024-07-13 06:05:10.107156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:59144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.271 [2024-07-13 06:05:10.107170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.271 
[2024-07-13 06:05:10.107185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:59152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.271 [2024-07-13 06:05:10.107199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.271 [2024-07-13 06:05:10.107215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:59160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.271 [2024-07-13 06:05:10.107229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.271 [2024-07-13 06:05:10.107245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:59168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.271 [2024-07-13 06:05:10.107258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.271 [2024-07-13 06:05:10.107274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:59176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.271 [2024-07-13 06:05:10.107289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.271 [2024-07-13 06:05:10.107306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:59184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.271 [2024-07-13 06:05:10.107319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.271 [2024-07-13 06:05:10.107335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:59192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.271 [2024-07-13 06:05:10.107349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.271 [2024-07-13 06:05:10.107365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:59200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.271 [2024-07-13 06:05:10.107393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.271 [2024-07-13 06:05:10.107431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:59208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.271 [2024-07-13 06:05:10.107461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.271 [2024-07-13 06:05:10.107491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:59216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.271 [2024-07-13 06:05:10.107536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.271 [2024-07-13 06:05:10.107552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:59224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.271 [2024-07-13 06:05:10.107577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.271 [2024-07-13 06:05:10.107605] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:59232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.271 [2024-07-13 06:05:10.107623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.271 [2024-07-13 06:05:10.107639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:59240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.271 [2024-07-13 06:05:10.107656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.271 [2024-07-13 06:05:10.107673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:59248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.271 [2024-07-13 06:05:10.107687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.271 [2024-07-13 06:05:10.107703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:59256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.271 [2024-07-13 06:05:10.107716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.271 [2024-07-13 06:05:10.107732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:59264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.271 [2024-07-13 06:05:10.107746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.271 [2024-07-13 06:05:10.107775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:59272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.271 [2024-07-13 06:05:10.107789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.271 [2024-07-13 06:05:10.107819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:59280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.271 [2024-07-13 06:05:10.107832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.271 [2024-07-13 06:05:10.107848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:59288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.271 [2024-07-13 06:05:10.107876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.271 [2024-07-13 06:05:10.107908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:59296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.271 [2024-07-13 06:05:10.107922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.271 [2024-07-13 06:05:10.107938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:59304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.271 [2024-07-13 06:05:10.107964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.271 [2024-07-13 06:05:10.107981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:98 nsid:1 lba:59312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.271 [2024-07-13 06:05:10.107996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.271 [2024-07-13 06:05:10.108012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:59320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.271 [2024-07-13 06:05:10.108025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.271 [2024-07-13 06:05:10.108041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:59328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.271 [2024-07-13 06:05:10.108055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.271 [2024-07-13 06:05:10.108070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:59336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.271 [2024-07-13 06:05:10.108084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.271 [2024-07-13 06:05:10.108100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:59344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.271 [2024-07-13 06:05:10.108114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.271 [2024-07-13 06:05:10.108129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:59352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.271 [2024-07-13 06:05:10.108143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.272 [2024-07-13 06:05:10.108159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:59360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.272 [2024-07-13 06:05:10.108172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.272 [2024-07-13 06:05:10.108188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:59368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.272 [2024-07-13 06:05:10.108204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.272 [2024-07-13 06:05:10.108220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:59376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.272 [2024-07-13 06:05:10.108234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.272 [2024-07-13 06:05:10.108250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:59384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.272 [2024-07-13 06:05:10.108264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.272 [2024-07-13 06:05:10.108279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:59392 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.272 [2024-07-13 06:05:10.108293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.272 [2024-07-13 06:05:10.108309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:59400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.272 [2024-07-13 06:05:10.108322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.272 [2024-07-13 06:05:10.108338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:59408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.272 [2024-07-13 06:05:10.108358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.272 [2024-07-13 06:05:10.108375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:59416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.272 [2024-07-13 06:05:10.108389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.272 [2024-07-13 06:05:10.108405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:59424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.272 [2024-07-13 06:05:10.108419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.272 [2024-07-13 06:05:10.108447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:59432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.272 [2024-07-13 06:05:10.108462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.272 [2024-07-13 06:05:10.108478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:59440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.272 [2024-07-13 06:05:10.108492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.272 [2024-07-13 06:05:10.108508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:59448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.272 [2024-07-13 06:05:10.108522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.272 [2024-07-13 06:05:10.108539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:59456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.272 [2024-07-13 06:05:10.108553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.272 [2024-07-13 06:05:10.108568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:59464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.272 [2024-07-13 06:05:10.108583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.272 [2024-07-13 06:05:10.108598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:59472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:33.272 [2024-07-13 06:05:10.108612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.272 [2024-07-13 06:05:10.108628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:59480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.272 [2024-07-13 06:05:10.108641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.272 [2024-07-13 06:05:10.108657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:59488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.272 [2024-07-13 06:05:10.108671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.272 [2024-07-13 06:05:10.108687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:59496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.272 [2024-07-13 06:05:10.108703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.272 [2024-07-13 06:05:10.108719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:59504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.272 [2024-07-13 06:05:10.108733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.272 [2024-07-13 06:05:10.108756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:59512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.272 [2024-07-13 06:05:10.108771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.272 [2024-07-13 06:05:10.108787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:59520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.272 [2024-07-13 06:05:10.108801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.272 [2024-07-13 06:05:10.108817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:59528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.272 [2024-07-13 06:05:10.108831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.272 [2024-07-13 06:05:10.108846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:59536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.272 [2024-07-13 06:05:10.108861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.272 [2024-07-13 06:05:10.108877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:59560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.272 [2024-07-13 06:05:10.108891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.272 [2024-07-13 06:05:10.108907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:59568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.272 [2024-07-13 
06:05:10.108921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.272 [2024-07-13 06:05:10.108936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:59576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.272 [2024-07-13 06:05:10.108950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.272 [2024-07-13 06:05:10.108966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:59584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.272 [2024-07-13 06:05:10.108980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.272 [2024-07-13 06:05:10.108995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:59592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.272 [2024-07-13 06:05:10.109009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.272 [2024-07-13 06:05:10.109025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:59600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.272 [2024-07-13 06:05:10.109038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.272 [2024-07-13 06:05:10.109064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:59608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.272 [2024-07-13 06:05:10.109078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.272 [2024-07-13 06:05:10.109093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:59616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.272 [2024-07-13 06:05:10.109108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.272 [2024-07-13 06:05:10.109123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:59624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.272 [2024-07-13 06:05:10.109143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.272 [2024-07-13 06:05:10.109160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:59632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.272 [2024-07-13 06:05:10.109174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.272 [2024-07-13 06:05:10.109190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:59640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.272 [2024-07-13 06:05:10.109206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.272 [2024-07-13 06:05:10.109222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:59648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.272 [2024-07-13 06:05:10.109236] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.272 [2024-07-13 06:05:10.109252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:59656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.272 [2024-07-13 06:05:10.109266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.272 [2024-07-13 06:05:10.109282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:59664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.272 [2024-07-13 06:05:10.109295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.272 [2024-07-13 06:05:10.109311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:59672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.272 [2024-07-13 06:05:10.109325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.272 [2024-07-13 06:05:10.109340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:59544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.272 [2024-07-13 06:05:10.109354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.272 [2024-07-13 06:05:10.109379] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x779200 is same with the state(5) to be set 00:18:33.272 [2024-07-13 06:05:10.109397] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:33.272 [2024-07-13 06:05:10.109408] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:33.272 [2024-07-13 06:05:10.109420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59552 len:8 PRP1 0x0 PRP2 0x0 00:18:33.272 [2024-07-13 06:05:10.109434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.272 [2024-07-13 06:05:10.109487] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x779200 was disconnected and freed. reset controller. 00:18:33.272 [2024-07-13 06:05:10.109506] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:18:33.272 [2024-07-13 06:05:10.109521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:33.273 [2024-07-13 06:05:10.113562] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:33.273 [2024-07-13 06:05:10.113601] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x757f60 (9): Bad file descriptor 00:18:33.273 [2024-07-13 06:05:10.152348] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
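The notices immediately above record one complete bdev_nvme failover: the qpair on 10.0.0.2:4420 is disconnected and freed, the controller for nqn.2016-06.io.spdk:cnode1 briefly enters the failed state, and after a successful reset I/O resumes on the alternate path at 10.0.0.2:4421. That alternate path has to have been registered beforehand under the same controller name; a sketch of that setup, assuming the values used elsewhere in this log (the earlier attach commands themselves are not part of this excerpt), would be:

  # assumed earlier setup: two paths registered under the same -b name,
  # so bdev_nvme can fail over between them when one listener goes away
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1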
00:18:33.273 [2024-07-13 06:05:13.707914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:47752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.273 [2024-07-13 06:05:13.708006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.273 [2024-07-13 06:05:13.708038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:47760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.273 [2024-07-13 06:05:13.708055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.273 [2024-07-13 06:05:13.708072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:47768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.273 [2024-07-13 06:05:13.708086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.273 [2024-07-13 06:05:13.708102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:47776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.273 [2024-07-13 06:05:13.708116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.273 [2024-07-13 06:05:13.708133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:47784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.273 [2024-07-13 06:05:13.708147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.273 [2024-07-13 06:05:13.708163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:47792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.273 [2024-07-13 06:05:13.708177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.273 [2024-07-13 06:05:13.708193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:47800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.273 [2024-07-13 06:05:13.708207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.273 [2024-07-13 06:05:13.708223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:47808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.273 [2024-07-13 06:05:13.708237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.273 [2024-07-13 06:05:13.708253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.273 [2024-07-13 06:05:13.708268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.273 [2024-07-13 06:05:13.708283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:47824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.273 [2024-07-13 06:05:13.708297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.273 [2024-07-13 06:05:13.708323] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:47832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.273 [2024-07-13 06:05:13.708337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.273 [2024-07-13 06:05:13.708353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:47840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.273 [2024-07-13 06:05:13.708379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.273 [2024-07-13 06:05:13.708398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:47240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.273 [2024-07-13 06:05:13.708413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.273 [2024-07-13 06:05:13.708439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:47248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.273 [2024-07-13 06:05:13.708455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.273 [2024-07-13 06:05:13.708472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:47256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.273 [2024-07-13 06:05:13.708486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.273 [2024-07-13 06:05:13.708502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:47264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.273 [2024-07-13 06:05:13.708516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.273 [2024-07-13 06:05:13.708532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:47272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.273 [2024-07-13 06:05:13.708546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.273 [2024-07-13 06:05:13.708564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:47280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.273 [2024-07-13 06:05:13.708578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.273 [2024-07-13 06:05:13.708595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:47288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.273 [2024-07-13 06:05:13.708609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.273 [2024-07-13 06:05:13.708625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:47296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.273 [2024-07-13 06:05:13.708640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.273 [2024-07-13 06:05:13.708656] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:118 nsid:1 lba:47304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.273 [2024-07-13 06:05:13.708670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.273 [2024-07-13 06:05:13.708686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:47312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.273 [2024-07-13 06:05:13.708700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.273 [2024-07-13 06:05:13.708715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:47320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.273 [2024-07-13 06:05:13.708729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.273 [2024-07-13 06:05:13.708745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:47328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.273 [2024-07-13 06:05:13.708759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.273 [2024-07-13 06:05:13.708775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:47336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.273 [2024-07-13 06:05:13.708789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.273 [2024-07-13 06:05:13.708805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:47344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.273 [2024-07-13 06:05:13.708819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.273 [2024-07-13 06:05:13.708843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:47352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.273 [2024-07-13 06:05:13.708859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.273 [2024-07-13 06:05:13.708875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:47360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.273 [2024-07-13 06:05:13.708889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.273 [2024-07-13 06:05:13.708905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:47848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.273 [2024-07-13 06:05:13.708920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.273 [2024-07-13 06:05:13.708936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:47856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.273 [2024-07-13 06:05:13.708950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.273 [2024-07-13 06:05:13.708966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 
lba:47864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.273 [2024-07-13 06:05:13.708980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.273 [2024-07-13 06:05:13.709006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:47872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.273 [2024-07-13 06:05:13.709025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.273 [2024-07-13 06:05:13.709041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:47880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.274 [2024-07-13 06:05:13.709056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.274 [2024-07-13 06:05:13.709072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:47888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.274 [2024-07-13 06:05:13.709086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.274 [2024-07-13 06:05:13.709102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:47896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.274 [2024-07-13 06:05:13.709116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.274 [2024-07-13 06:05:13.709132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:47904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.274 [2024-07-13 06:05:13.709146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.274 [2024-07-13 06:05:13.709162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:47912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.274 [2024-07-13 06:05:13.709176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.274 [2024-07-13 06:05:13.709192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:47920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.274 [2024-07-13 06:05:13.709206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.274 [2024-07-13 06:05:13.709221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:47928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.274 [2024-07-13 06:05:13.709244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.274 [2024-07-13 06:05:13.709260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:47936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.274 [2024-07-13 06:05:13.709275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.274 [2024-07-13 06:05:13.709306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:47944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:33.274 [2024-07-13 06:05:13.709337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.274 [2024-07-13 06:05:13.709353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:47952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.274 [2024-07-13 06:05:13.709367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.274 [2024-07-13 06:05:13.709383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:47368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.274 [2024-07-13 06:05:13.709409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.274 [2024-07-13 06:05:13.709428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:47376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.274 [2024-07-13 06:05:13.709442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.274 [2024-07-13 06:05:13.709458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:47384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.274 [2024-07-13 06:05:13.709472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.274 [2024-07-13 06:05:13.709488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:47392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.274 [2024-07-13 06:05:13.709502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.274 [2024-07-13 06:05:13.709518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:47400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.274 [2024-07-13 06:05:13.709532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.274 [2024-07-13 06:05:13.709548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:47408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.274 [2024-07-13 06:05:13.709562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.274 [2024-07-13 06:05:13.709578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:47416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.274 [2024-07-13 06:05:13.709592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.274 [2024-07-13 06:05:13.709608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:47424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.274 [2024-07-13 06:05:13.709622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.274 [2024-07-13 06:05:13.709638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:47432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.274 [2024-07-13 06:05:13.709652] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.274 [2024-07-13 06:05:13.709676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:47440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.274 [2024-07-13 06:05:13.709691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.274 [2024-07-13 06:05:13.709707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:47448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.274 [2024-07-13 06:05:13.709721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.274 [2024-07-13 06:05:13.709737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:47456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.274 [2024-07-13 06:05:13.709751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.274 [2024-07-13 06:05:13.709767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:47464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.274 [2024-07-13 06:05:13.709781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.274 [2024-07-13 06:05:13.709797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:47472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.274 [2024-07-13 06:05:13.709811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.274 [2024-07-13 06:05:13.709827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:47480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.274 [2024-07-13 06:05:13.709841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.274 [2024-07-13 06:05:13.709857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:47488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.274 [2024-07-13 06:05:13.709871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.274 [2024-07-13 06:05:13.709887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:47960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.274 [2024-07-13 06:05:13.709901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.274 [2024-07-13 06:05:13.709917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:47968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.274 [2024-07-13 06:05:13.709931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.274 [2024-07-13 06:05:13.709946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:47976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.274 [2024-07-13 06:05:13.709960] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.274 [2024-07-13 06:05:13.709984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:47984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.274 [2024-07-13 06:05:13.709998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.274 [2024-07-13 06:05:13.710014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:47992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.274 [2024-07-13 06:05:13.710037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.274 [2024-07-13 06:05:13.710057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:48000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.274 [2024-07-13 06:05:13.710078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.274 [2024-07-13 06:05:13.710095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:48008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.274 [2024-07-13 06:05:13.710110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.274 [2024-07-13 06:05:13.710126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:48016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.274 [2024-07-13 06:05:13.710140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.274 [2024-07-13 06:05:13.710156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:48024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.274 [2024-07-13 06:05:13.710170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.274 [2024-07-13 06:05:13.710185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:48032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.274 [2024-07-13 06:05:13.710200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.274 [2024-07-13 06:05:13.710215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:48040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.274 [2024-07-13 06:05:13.710229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.274 [2024-07-13 06:05:13.710245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:48048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.274 [2024-07-13 06:05:13.710259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.274 [2024-07-13 06:05:13.710275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:48056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.274 [2024-07-13 06:05:13.710289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.274 [2024-07-13 06:05:13.710304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:48064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.274 [2024-07-13 06:05:13.710318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.274 [2024-07-13 06:05:13.710334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:47496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.274 [2024-07-13 06:05:13.710349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.274 [2024-07-13 06:05:13.710365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:47504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.274 [2024-07-13 06:05:13.710391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.274 [2024-07-13 06:05:13.710407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:47512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.274 [2024-07-13 06:05:13.710422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.275 [2024-07-13 06:05:13.710438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:47520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.275 [2024-07-13 06:05:13.710451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.275 [2024-07-13 06:05:13.710467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:47528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.275 [2024-07-13 06:05:13.710489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.275 [2024-07-13 06:05:13.710506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:47536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.275 [2024-07-13 06:05:13.710520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.275 [2024-07-13 06:05:13.710536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:47544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.275 [2024-07-13 06:05:13.710551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.275 [2024-07-13 06:05:13.710576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:47552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.275 [2024-07-13 06:05:13.710590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.275 [2024-07-13 06:05:13.710606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:48072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.275 [2024-07-13 06:05:13.710619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.275 
[2024-07-13 06:05:13.710635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:48080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.275 [2024-07-13 06:05:13.710650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.275 [2024-07-13 06:05:13.710666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:48088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.275 [2024-07-13 06:05:13.710680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.275 [2024-07-13 06:05:13.710695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:48096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.275 [2024-07-13 06:05:13.710709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.275 [2024-07-13 06:05:13.710726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:48104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.275 [2024-07-13 06:05:13.710740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.275 [2024-07-13 06:05:13.710755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:48112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.275 [2024-07-13 06:05:13.710769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.275 [2024-07-13 06:05:13.710785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:48120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.275 [2024-07-13 06:05:13.710799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.275 [2024-07-13 06:05:13.710815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:48128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.275 [2024-07-13 06:05:13.710829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.275 [2024-07-13 06:05:13.710844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:48136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.275 [2024-07-13 06:05:13.710858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.275 [2024-07-13 06:05:13.710881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:48144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.275 [2024-07-13 06:05:13.710896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.275 [2024-07-13 06:05:13.710912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:48152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.275 [2024-07-13 06:05:13.710926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.275 [2024-07-13 06:05:13.710942] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:48160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.275 [2024-07-13 06:05:13.710956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.275 [2024-07-13 06:05:13.710972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:48168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.275 [2024-07-13 06:05:13.710986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.275 [2024-07-13 06:05:13.711002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:48176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.275 [2024-07-13 06:05:13.711016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.275 [2024-07-13 06:05:13.711031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:48184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.275 [2024-07-13 06:05:13.711045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.275 [2024-07-13 06:05:13.711061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:48192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.275 [2024-07-13 06:05:13.711075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.275 [2024-07-13 06:05:13.711091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:47560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.275 [2024-07-13 06:05:13.711105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.275 [2024-07-13 06:05:13.711121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:47568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.275 [2024-07-13 06:05:13.711135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.275 [2024-07-13 06:05:13.711151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:47576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.275 [2024-07-13 06:05:13.711165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.275 [2024-07-13 06:05:13.711180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:47584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.275 [2024-07-13 06:05:13.711194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.275 [2024-07-13 06:05:13.711211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:47592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.275 [2024-07-13 06:05:13.711225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.275 [2024-07-13 06:05:13.711241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:101 nsid:1 lba:47600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.275 [2024-07-13 06:05:13.711261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.275 [2024-07-13 06:05:13.711278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:47608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.275 [2024-07-13 06:05:13.711293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.275 [2024-07-13 06:05:13.711309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:47616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.275 [2024-07-13 06:05:13.711324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.275 [2024-07-13 06:05:13.711340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:47624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.275 [2024-07-13 06:05:13.711353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.275 [2024-07-13 06:05:13.711382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:47632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.275 [2024-07-13 06:05:13.711398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.275 [2024-07-13 06:05:13.711414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:47640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.275 [2024-07-13 06:05:13.711429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.275 [2024-07-13 06:05:13.711445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:47648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.275 [2024-07-13 06:05:13.711459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.275 [2024-07-13 06:05:13.711474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:47656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.275 [2024-07-13 06:05:13.711488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.275 [2024-07-13 06:05:13.711504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:47664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.275 [2024-07-13 06:05:13.711518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.275 [2024-07-13 06:05:13.711534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:47672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.275 [2024-07-13 06:05:13.711548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.275 [2024-07-13 06:05:13.711563] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x752250 is same with 
the state(5) to be set 00:18:33.275 [2024-07-13 06:05:13.711580] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:33.275 [2024-07-13 06:05:13.711591] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:33.275 [2024-07-13 06:05:13.711602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:47680 len:8 PRP1 0x0 PRP2 0x0 00:18:33.275 [2024-07-13 06:05:13.711616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.275 [2024-07-13 06:05:13.711631] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:33.275 [2024-07-13 06:05:13.711641] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:33.275 [2024-07-13 06:05:13.711652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48200 len:8 PRP1 0x0 PRP2 0x0 00:18:33.275 [2024-07-13 06:05:13.711673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.275 [2024-07-13 06:05:13.711688] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:33.275 [2024-07-13 06:05:13.711700] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:33.275 [2024-07-13 06:05:13.711711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48208 len:8 PRP1 0x0 PRP2 0x0 00:18:33.275 [2024-07-13 06:05:13.711724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.275 [2024-07-13 06:05:13.711738] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:33.275 [2024-07-13 06:05:13.711749] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:33.276 [2024-07-13 06:05:13.711760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48216 len:8 PRP1 0x0 PRP2 0x0 00:18:33.276 [2024-07-13 06:05:13.711773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.276 [2024-07-13 06:05:13.711787] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:33.276 [2024-07-13 06:05:13.711797] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:33.276 [2024-07-13 06:05:13.711808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48224 len:8 PRP1 0x0 PRP2 0x0 00:18:33.276 [2024-07-13 06:05:13.711822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.276 [2024-07-13 06:05:13.711836] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:33.276 [2024-07-13 06:05:13.711847] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:33.276 [2024-07-13 06:05:13.711858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48232 len:8 PRP1 0x0 PRP2 0x0 00:18:33.276 [2024-07-13 06:05:13.711871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.276 [2024-07-13 
06:05:13.711885] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:33.276 [2024-07-13 06:05:13.711896] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:33.276 [2024-07-13 06:05:13.711907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48240 len:8 PRP1 0x0 PRP2 0x0 00:18:33.276 [2024-07-13 06:05:13.711920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.276 [2024-07-13 06:05:13.711935] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:33.276 [2024-07-13 06:05:13.711945] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:33.276 [2024-07-13 06:05:13.711956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48248 len:8 PRP1 0x0 PRP2 0x0 00:18:33.276 [2024-07-13 06:05:13.711969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.276 [2024-07-13 06:05:13.711983] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:33.276 [2024-07-13 06:05:13.711993] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:33.276 [2024-07-13 06:05:13.712004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48256 len:8 PRP1 0x0 PRP2 0x0 00:18:33.276 [2024-07-13 06:05:13.712018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.276 [2024-07-13 06:05:13.712032] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:33.276 [2024-07-13 06:05:13.712042] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:33.276 [2024-07-13 06:05:13.712059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:47688 len:8 PRP1 0x0 PRP2 0x0 00:18:33.276 [2024-07-13 06:05:13.712073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.276 [2024-07-13 06:05:13.712088] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:33.276 [2024-07-13 06:05:13.712099] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:33.276 [2024-07-13 06:05:13.712110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:47696 len:8 PRP1 0x0 PRP2 0x0 00:18:33.276 [2024-07-13 06:05:13.712123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.276 [2024-07-13 06:05:13.712137] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:33.276 [2024-07-13 06:05:13.712148] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:33.276 [2024-07-13 06:05:13.712159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:47704 len:8 PRP1 0x0 PRP2 0x0 00:18:33.276 [2024-07-13 06:05:13.712173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.276 [2024-07-13 06:05:13.712187] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:33.276 [2024-07-13 06:05:13.712197] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:33.276 [2024-07-13 06:05:13.712208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:47712 len:8 PRP1 0x0 PRP2 0x0 00:18:33.276 [2024-07-13 06:05:13.712221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.276 [2024-07-13 06:05:13.712235] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:33.276 [2024-07-13 06:05:13.712246] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:33.276 [2024-07-13 06:05:13.712257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:47720 len:8 PRP1 0x0 PRP2 0x0 00:18:33.276 [2024-07-13 06:05:13.712270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.276 [2024-07-13 06:05:13.712285] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:33.276 [2024-07-13 06:05:13.712295] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:33.276 [2024-07-13 06:05:13.712306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:47728 len:8 PRP1 0x0 PRP2 0x0 00:18:33.276 [2024-07-13 06:05:13.712319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.276 [2024-07-13 06:05:13.712333] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:33.276 [2024-07-13 06:05:13.712343] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:33.276 [2024-07-13 06:05:13.712354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:47736 len:8 PRP1 0x0 PRP2 0x0 00:18:33.276 [2024-07-13 06:05:13.712379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.276 [2024-07-13 06:05:13.712395] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:33.276 [2024-07-13 06:05:13.712406] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:33.276 [2024-07-13 06:05:13.712417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:47744 len:8 PRP1 0x0 PRP2 0x0 00:18:33.276 [2024-07-13 06:05:13.712431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.276 [2024-07-13 06:05:13.712556] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x752250 was disconnected and freed. reset controller. 
00:18:33.276 [2024-07-13 06:05:13.712576] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:18:33.276 [2024-07-13 06:05:13.712634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:33.276 [2024-07-13 06:05:13.712655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.276 [2024-07-13 06:05:13.712671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:33.276 [2024-07-13 06:05:13.712691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.276 [2024-07-13 06:05:13.712707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:33.276 [2024-07-13 06:05:13.712720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.276 [2024-07-13 06:05:13.712735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:33.276 [2024-07-13 06:05:13.712748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.276 [2024-07-13 06:05:13.712763] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:33.276 [2024-07-13 06:05:13.716727] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:33.276 [2024-07-13 06:05:13.716768] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x757f60 (9): Bad file descriptor 00:18:33.276 [2024-07-13 06:05:13.768580] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
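The sequence above is the failover path this nvmf test exercises: outstanding I/O on qpair 0x752250 is completed with ABORTED - SQ DELETION (status 00/08), the qpair is freed, bdev_nvme fails over from 10.0.0.2:4421 to 10.0.0.2:4422, and the controller reset completes successfully. As a minimal sketch of what that (00/08) status means to an SPDK NVMe application, the callback below classifies such completions using the public spdk/nvme.h status codes printed in this log; the io_ctx struct and the requeue_for_retry()/complete_io() helpers are hypothetical placeholders for illustration only and are not part of SPDK or of this test.

#include <errno.h>
#include "spdk/nvme.h"

struct io_ctx { void *buf; };  /* hypothetical per-I/O context */

/* Hypothetical helpers: a real application would resubmit or finish the I/O here. */
static void requeue_for_retry(struct io_ctx *io) { (void)io; }
static void complete_io(struct io_ctx *io, int rc) { (void)io; (void)rc; }

/* spdk_nvme_cmd_cb-style completion callback */
static void
io_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
	struct io_ctx *io = arg;

	if (!spdk_nvme_cpl_is_error(cpl)) {
		complete_io(io, 0);
		return;
	}

	/* "(00/08)" in the log above: SCT 0x0 (generic) / SC 0x08,
	 * i.e. ABORTED - SQ DELETION. These commands were dropped only
	 * because their submission queue was deleted during the path
	 * switch, so they are candidates for resubmission once the
	 * controller has been reset on the new path. */
	if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
		requeue_for_retry(io);
		return;
	}

	complete_io(io, -EIO);
}

This is only an illustration of the status codes shown in the log; the bdev_nvme layer in the test above handles the abort-and-reset sequence itself before the run continues with the next batch of I/O at 06:05:18.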
00:18:33.276 [2024-07-13 06:05:18.244962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:111608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.276 [2024-07-13 06:05:18.245031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.276 [2024-07-13 06:05:18.245062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:111616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.276 [2024-07-13 06:05:18.245079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.276 [2024-07-13 06:05:18.245096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:111624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.276 [2024-07-13 06:05:18.245110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.276 [2024-07-13 06:05:18.245125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:111632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.276 [2024-07-13 06:05:18.245139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.276 [2024-07-13 06:05:18.245155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:111640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.276 [2024-07-13 06:05:18.245169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.276 [2024-07-13 06:05:18.245185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:111648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.276 [2024-07-13 06:05:18.245199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.276 [2024-07-13 06:05:18.245215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:111656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.276 [2024-07-13 06:05:18.245254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.276 [2024-07-13 06:05:18.245272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:111664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.276 [2024-07-13 06:05:18.245286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.276 [2024-07-13 06:05:18.245301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:111672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.276 [2024-07-13 06:05:18.245315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.276 [2024-07-13 06:05:18.245331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:111680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.276 [2024-07-13 06:05:18.245345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.276 [2024-07-13 06:05:18.245360] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:111688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.276 [2024-07-13 06:05:18.245393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.276 [2024-07-13 06:05:18.245411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:111696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.276 [2024-07-13 06:05:18.245425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.276 [2024-07-13 06:05:18.245441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:111704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.276 [2024-07-13 06:05:18.245455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.277 [2024-07-13 06:05:18.245471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:111712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.277 [2024-07-13 06:05:18.245485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.277 [2024-07-13 06:05:18.245501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:111720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.277 [2024-07-13 06:05:18.245514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.277 [2024-07-13 06:05:18.245530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:111728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.277 [2024-07-13 06:05:18.245544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.277 [2024-07-13 06:05:18.245559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:111224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.277 [2024-07-13 06:05:18.245573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.277 [2024-07-13 06:05:18.245591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:111232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.277 [2024-07-13 06:05:18.245606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.277 [2024-07-13 06:05:18.245621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:111240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.277 [2024-07-13 06:05:18.245635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.277 [2024-07-13 06:05:18.245661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:111248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.277 [2024-07-13 06:05:18.245676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.277 [2024-07-13 06:05:18.245693] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:111256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.277 [2024-07-13 06:05:18.245707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.277 [2024-07-13 06:05:18.245723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:111264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.277 [2024-07-13 06:05:18.245739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.277 [2024-07-13 06:05:18.245755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:111272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.277 [2024-07-13 06:05:18.245769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.277 [2024-07-13 06:05:18.245785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:111280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.277 [2024-07-13 06:05:18.245799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.277 [2024-07-13 06:05:18.245815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:111288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.277 [2024-07-13 06:05:18.245829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.277 [2024-07-13 06:05:18.245845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:111296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.277 [2024-07-13 06:05:18.245859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.277 [2024-07-13 06:05:18.245875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:111304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.277 [2024-07-13 06:05:18.245890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.277 [2024-07-13 06:05:18.245906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:111312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.277 [2024-07-13 06:05:18.245920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.277 [2024-07-13 06:05:18.245936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:111320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.277 [2024-07-13 06:05:18.245950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.277 [2024-07-13 06:05:18.245966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:111328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.277 [2024-07-13 06:05:18.245980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.277 [2024-07-13 06:05:18.245996] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:32 nsid:1 lba:111336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.277 [2024-07-13 06:05:18.246010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.277 [2024-07-13 06:05:18.246026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:111344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.277 [2024-07-13 06:05:18.246060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.277 [2024-07-13 06:05:18.246078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:111736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.277 [2024-07-13 06:05:18.246093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.277 [2024-07-13 06:05:18.246110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:111744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.277 [2024-07-13 06:05:18.246124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.277 [2024-07-13 06:05:18.246140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:111752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.277 [2024-07-13 06:05:18.246154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.277 [2024-07-13 06:05:18.246170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:111760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.277 [2024-07-13 06:05:18.246184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.277 [2024-07-13 06:05:18.246200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:111768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.277 [2024-07-13 06:05:18.246214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.277 [2024-07-13 06:05:18.246230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:111776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.277 [2024-07-13 06:05:18.246244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.277 [2024-07-13 06:05:18.246260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:111784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.277 [2024-07-13 06:05:18.246274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.277 [2024-07-13 06:05:18.246290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:111792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.277 [2024-07-13 06:05:18.246304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.277 [2024-07-13 06:05:18.246320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 
lba:111352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.277 [2024-07-13 06:05:18.246334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.277 [2024-07-13 06:05:18.246350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:111360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.277 [2024-07-13 06:05:18.246364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.277 [2024-07-13 06:05:18.246392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:111368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.277 [2024-07-13 06:05:18.246407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.277 [2024-07-13 06:05:18.246423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:111376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.277 [2024-07-13 06:05:18.246437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.277 [2024-07-13 06:05:18.246464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:111384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.277 [2024-07-13 06:05:18.246479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.277 [2024-07-13 06:05:18.246495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:111392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.277 [2024-07-13 06:05:18.246509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.277 [2024-07-13 06:05:18.246525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:111400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.277 [2024-07-13 06:05:18.246539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.277 [2024-07-13 06:05:18.246565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:111408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.277 [2024-07-13 06:05:18.246579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.277 [2024-07-13 06:05:18.246595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:111800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.277 [2024-07-13 06:05:18.246610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.277 [2024-07-13 06:05:18.246626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:111808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.277 [2024-07-13 06:05:18.246640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.277 [2024-07-13 06:05:18.246656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:111816 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:18:33.277 [2024-07-13 06:05:18.246670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.277 [2024-07-13 06:05:18.246686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:111824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.277 [2024-07-13 06:05:18.246700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.277 [2024-07-13 06:05:18.246716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:111832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.277 [2024-07-13 06:05:18.246730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.277 [2024-07-13 06:05:18.246746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.277 [2024-07-13 06:05:18.246759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.277 [2024-07-13 06:05:18.246777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:111848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.277 [2024-07-13 06:05:18.246791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.277 [2024-07-13 06:05:18.246807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:111856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.277 [2024-07-13 06:05:18.246820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.278 [2024-07-13 06:05:18.246836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:111864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.278 [2024-07-13 06:05:18.246857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.278 [2024-07-13 06:05:18.246875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:111872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.278 [2024-07-13 06:05:18.246889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.278 [2024-07-13 06:05:18.246905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:111880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.278 [2024-07-13 06:05:18.246919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.278 [2024-07-13 06:05:18.246935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:111888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.278 [2024-07-13 06:05:18.246949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.278 [2024-07-13 06:05:18.246965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:111896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.278 
[2024-07-13 06:05:18.246979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.278 [2024-07-13 06:05:18.246995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:111904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.278 [2024-07-13 06:05:18.247009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.278 [2024-07-13 06:05:18.247025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:111912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.278 [2024-07-13 06:05:18.247039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.278 [2024-07-13 06:05:18.247055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:111920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.278 [2024-07-13 06:05:18.247069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.278 [2024-07-13 06:05:18.247085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:111928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.278 [2024-07-13 06:05:18.247099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.278 [2024-07-13 06:05:18.247115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:111936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.278 [2024-07-13 06:05:18.247129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.278 [2024-07-13 06:05:18.247145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:111944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.278 [2024-07-13 06:05:18.247159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.278 [2024-07-13 06:05:18.247175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:111952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.278 [2024-07-13 06:05:18.247189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.278 [2024-07-13 06:05:18.247205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:111960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.278 [2024-07-13 06:05:18.247219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.278 [2024-07-13 06:05:18.247242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:111968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.278 [2024-07-13 06:05:18.247257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.278 [2024-07-13 06:05:18.247273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:111976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.278 [2024-07-13 06:05:18.247288] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.278 [2024-07-13 06:05:18.247304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:111984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.278 [2024-07-13 06:05:18.247318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.278 [2024-07-13 06:05:18.247334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:111416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.278 [2024-07-13 06:05:18.247348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.278 [2024-07-13 06:05:18.247375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:111424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.278 [2024-07-13 06:05:18.247392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.278 [2024-07-13 06:05:18.247409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:111432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.278 [2024-07-13 06:05:18.247423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.278 [2024-07-13 06:05:18.247439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:111440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.278 [2024-07-13 06:05:18.247453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.278 [2024-07-13 06:05:18.247469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:111448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.278 [2024-07-13 06:05:18.247483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.278 [2024-07-13 06:05:18.247500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:111456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.278 [2024-07-13 06:05:18.247513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.278 [2024-07-13 06:05:18.247543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:111464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.278 [2024-07-13 06:05:18.247557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.278 [2024-07-13 06:05:18.247573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:111472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.278 [2024-07-13 06:05:18.247587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.278 [2024-07-13 06:05:18.247603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:111992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.278 [2024-07-13 06:05:18.247616] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.278 [2024-07-13 06:05:18.247632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:112000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.278 [2024-07-13 06:05:18.247645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.278 [2024-07-13 06:05:18.247668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:112008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.278 [2024-07-13 06:05:18.247698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.278 [2024-07-13 06:05:18.247714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:112016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.278 [2024-07-13 06:05:18.247728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.278 [2024-07-13 06:05:18.247744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:112024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.278 [2024-07-13 06:05:18.247758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.278 [2024-07-13 06:05:18.247774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:112032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.278 [2024-07-13 06:05:18.247788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.278 [2024-07-13 06:05:18.247804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:112040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.278 [2024-07-13 06:05:18.247818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.278 [2024-07-13 06:05:18.247834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:112048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.278 [2024-07-13 06:05:18.247848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.278 [2024-07-13 06:05:18.247864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:111480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.278 [2024-07-13 06:05:18.247878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.278 [2024-07-13 06:05:18.247894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:111488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.278 [2024-07-13 06:05:18.247909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.278 [2024-07-13 06:05:18.247925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:111496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.278 [2024-07-13 06:05:18.247939] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.278 [2024-07-13 06:05:18.247955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:111504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.278 [2024-07-13 06:05:18.247969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.279 [2024-07-13 06:05:18.247985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:111512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.279 [2024-07-13 06:05:18.247999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.279 [2024-07-13 06:05:18.248016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:111520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.279 [2024-07-13 06:05:18.248030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.279 [2024-07-13 06:05:18.248046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:111528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.279 [2024-07-13 06:05:18.248066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.279 [2024-07-13 06:05:18.248084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:111536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.279 [2024-07-13 06:05:18.248098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.279 [2024-07-13 06:05:18.248115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:112056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.279 [2024-07-13 06:05:18.248129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.279 [2024-07-13 06:05:18.248145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:112064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.279 [2024-07-13 06:05:18.248158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.279 [2024-07-13 06:05:18.248175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:112072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.279 [2024-07-13 06:05:18.248189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.279 [2024-07-13 06:05:18.248204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:112080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.279 [2024-07-13 06:05:18.248218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.279 [2024-07-13 06:05:18.248234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:112088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.279 [2024-07-13 06:05:18.248248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.279 [2024-07-13 06:05:18.248264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:112096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.279 [2024-07-13 06:05:18.248278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.279 [2024-07-13 06:05:18.248294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:112104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.279 [2024-07-13 06:05:18.248313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.279 [2024-07-13 06:05:18.248330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:112112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.279 [2024-07-13 06:05:18.248345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.279 [2024-07-13 06:05:18.248360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:111544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.279 [2024-07-13 06:05:18.248374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.279 [2024-07-13 06:05:18.248404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:111552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.279 [2024-07-13 06:05:18.248443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.279 [2024-07-13 06:05:18.248459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:111560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.279 [2024-07-13 06:05:18.248473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.279 [2024-07-13 06:05:18.248499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:111568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.279 [2024-07-13 06:05:18.248514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.279 [2024-07-13 06:05:18.248529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:111576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.279 [2024-07-13 06:05:18.248542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.279 [2024-07-13 06:05:18.248557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:111584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.279 [2024-07-13 06:05:18.248570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.279 [2024-07-13 06:05:18.248585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:111592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.279 [2024-07-13 06:05:18.248599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:18:33.279 [2024-07-13 06:05:18.248613] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x752250 is same with the state(5) to be set 00:18:33.279 [2024-07-13 06:05:18.248629] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:33.279 [2024-07-13 06:05:18.248640] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:33.279 [2024-07-13 06:05:18.248651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:111600 len:8 PRP1 0x0 PRP2 0x0 00:18:33.279 [2024-07-13 06:05:18.248664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.279 [2024-07-13 06:05:18.248677] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:33.279 [2024-07-13 06:05:18.248688] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:33.279 [2024-07-13 06:05:18.248699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112120 len:8 PRP1 0x0 PRP2 0x0 00:18:33.279 [2024-07-13 06:05:18.248711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.279 [2024-07-13 06:05:18.248725] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:33.279 [2024-07-13 06:05:18.248740] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:33.279 [2024-07-13 06:05:18.248751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112128 len:8 PRP1 0x0 PRP2 0x0 00:18:33.279 [2024-07-13 06:05:18.248764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.279 [2024-07-13 06:05:18.248777] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:33.279 [2024-07-13 06:05:18.248787] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:33.279 [2024-07-13 06:05:18.248799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112136 len:8 PRP1 0x0 PRP2 0x0 00:18:33.279 [2024-07-13 06:05:18.248813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.279 [2024-07-13 06:05:18.248826] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:33.279 [2024-07-13 06:05:18.248836] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:33.279 [2024-07-13 06:05:18.248846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112144 len:8 PRP1 0x0 PRP2 0x0 00:18:33.279 [2024-07-13 06:05:18.248859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.279 [2024-07-13 06:05:18.248879] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:33.279 [2024-07-13 06:05:18.248890] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:33.279 [2024-07-13 06:05:18.248900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112152 len:8 PRP1 0x0 PRP2 0x0 00:18:33.279 [2024-07-13 
06:05:18.248913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.279 [2024-07-13 06:05:18.248926] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:33.279 [2024-07-13 06:05:18.248936] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:33.279 [2024-07-13 06:05:18.248947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112160 len:8 PRP1 0x0 PRP2 0x0 00:18:33.279 [2024-07-13 06:05:18.248959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.279 [2024-07-13 06:05:18.248972] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:33.279 [2024-07-13 06:05:18.248982] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:33.279 [2024-07-13 06:05:18.248992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112168 len:8 PRP1 0x0 PRP2 0x0 00:18:33.279 [2024-07-13 06:05:18.249005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.279 [2024-07-13 06:05:18.249035] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:33.279 [2024-07-13 06:05:18.249045] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:33.279 [2024-07-13 06:05:18.249056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112176 len:8 PRP1 0x0 PRP2 0x0 00:18:33.279 [2024-07-13 06:05:18.249069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.279 [2024-07-13 06:05:18.249084] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:33.279 [2024-07-13 06:05:18.249094] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:33.279 [2024-07-13 06:05:18.249105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112184 len:8 PRP1 0x0 PRP2 0x0 00:18:33.279 [2024-07-13 06:05:18.249118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.279 [2024-07-13 06:05:18.249132] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:33.279 [2024-07-13 06:05:18.249145] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:33.279 [2024-07-13 06:05:18.249156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112192 len:8 PRP1 0x0 PRP2 0x0 00:18:33.279 [2024-07-13 06:05:18.249169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.279 [2024-07-13 06:05:18.249183] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:33.279 [2024-07-13 06:05:18.249194] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:33.279 [2024-07-13 06:05:18.249207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112200 len:8 PRP1 0x0 PRP2 0x0 00:18:33.279 [2024-07-13 06:05:18.249221] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.279 [2024-07-13 06:05:18.249235] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:33.279 [2024-07-13 06:05:18.249245] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:33.279 [2024-07-13 06:05:18.249256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112208 len:8 PRP1 0x0 PRP2 0x0 00:18:33.279 [2024-07-13 06:05:18.249276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.279 [2024-07-13 06:05:18.249291] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:33.279 [2024-07-13 06:05:18.249302] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:33.280 [2024-07-13 06:05:18.249312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112216 len:8 PRP1 0x0 PRP2 0x0 00:18:33.280 [2024-07-13 06:05:18.249326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.280 [2024-07-13 06:05:18.249340] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:33.280 [2024-07-13 06:05:18.249351] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:33.280 [2024-07-13 06:05:18.249362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112224 len:8 PRP1 0x0 PRP2 0x0 00:18:33.280 [2024-07-13 06:05:18.249375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.280 [2024-07-13 06:05:18.249400] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:33.280 [2024-07-13 06:05:18.249411] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:33.280 [2024-07-13 06:05:18.249422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112232 len:8 PRP1 0x0 PRP2 0x0 00:18:33.280 [2024-07-13 06:05:18.249436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.280 [2024-07-13 06:05:18.249450] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:33.280 [2024-07-13 06:05:18.249461] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:33.280 [2024-07-13 06:05:18.249471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112240 len:8 PRP1 0x0 PRP2 0x0 00:18:33.280 [2024-07-13 06:05:18.249486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.280 [2024-07-13 06:05:18.249531] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x752250 was disconnected and freed. reset controller. 
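Every command that was still queued on the failed TCP qpair above completes with ABORTED - SQ DELETION (status 00/08) before qpair 0x752250 is freed and the controller reset begins. To confirm by hand that the bdevperf-side controller survived such a path drop, a minimal check over the same RPC socket used in this run might look like the following (socket path and controller name are taken from this log; adjust them for other setups):

  # hypothetical manual check against the bdevperf RPC socket used above
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0 \
      && echo 'NVMe0 still attached after failover'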
00:18:33.280 [2024-07-13 06:05:18.249549] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:18:33.280 [2024-07-13 06:05:18.249604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:33.280 [2024-07-13 06:05:18.249625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.280 [2024-07-13 06:05:18.249641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:33.280 [2024-07-13 06:05:18.249657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.280 [2024-07-13 06:05:18.249672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:33.280 [2024-07-13 06:05:18.249686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.280 [2024-07-13 06:05:18.249701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:33.280 [2024-07-13 06:05:18.249715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.280 [2024-07-13 06:05:18.249730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:33.280 [2024-07-13 06:05:18.249804] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x757f60 (9): Bad file descriptor 00:18:33.280 [2024-07-13 06:05:18.253922] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:33.280 [2024-07-13 06:05:18.290992] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:18:33.280 00:18:33.280 Latency(us) 00:18:33.280 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:33.280 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:33.280 Verification LBA range: start 0x0 length 0x4000 00:18:33.280 NVMe0n1 : 15.01 8718.48 34.06 235.39 0.00 14260.58 651.64 19541.64 00:18:33.280 =================================================================================================================== 00:18:33.280 Total : 8718.48 34.06 235.39 0.00 14260.58 651.64 19541.64 00:18:33.280 Received shutdown signal, test time was about 15.000000 seconds 00:18:33.280 00:18:33.280 Latency(us) 00:18:33.280 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:33.280 =================================================================================================================== 00:18:33.280 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:33.280 06:05:24 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:18:33.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
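The 15-second verify run above ends with a successful controller reset after each simulated path failure, and the script then counts those events; the `(( count != 3 ))` check that follows fails the test unless exactly three failovers succeeded. A sketch of that verification pattern, assuming the run's output was captured to the try.txt file this test cats later:

  # sketch of the pass/fail check, assuming output was saved to try.txt as in this run
  count=$(grep -c 'Resetting controller successful' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt)
  if (( count != 3 )); then
      echo "expected 3 successful failover resets, got $count" >&2
      exit 1
  fi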
00:18:33.280 06:05:24 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:18:33.280 06:05:24 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:18:33.280 06:05:24 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=89989 00:18:33.280 06:05:24 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 89989 /var/tmp/bdevperf.sock 00:18:33.280 06:05:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 89989 ']' 00:18:33.280 06:05:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:33.280 06:05:24 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:18:33.280 06:05:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:33.280 06:05:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:33.280 06:05:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:33.280 06:05:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:33.280 06:05:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:33.280 06:05:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:18:33.280 06:05:24 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:33.280 [2024-07-13 06:05:24.708819] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:33.280 06:05:24 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:18:33.280 [2024-07-13 06:05:24.957154] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:18:33.280 06:05:24 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:33.567 NVMe0n1 00:18:33.567 06:05:25 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:34.135 00:18:34.135 06:05:25 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:34.394 00:18:34.394 06:05:25 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:34.394 06:05:25 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:18:34.653 06:05:26 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:34.911 06:05:26 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:18:38.198 06:05:29 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:38.198 06:05:29 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:18:38.198 06:05:29 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=90064 00:18:38.198 06:05:29 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:38.198 06:05:29 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 90064 00:18:39.135 0 00:18:39.394 06:05:30 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:39.394 [2024-07-13 06:05:24.189239] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:18:39.394 [2024-07-13 06:05:24.189356] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89989 ] 00:18:39.394 [2024-07-13 06:05:24.324822] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:39.394 [2024-07-13 06:05:24.366023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.394 [2024-07-13 06:05:24.398093] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:39.394 [2024-07-13 06:05:26.468937] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:18:39.394 [2024-07-13 06:05:26.469080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:39.394 [2024-07-13 06:05:26.469106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.394 [2024-07-13 06:05:26.469124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:39.394 [2024-07-13 06:05:26.469138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.394 [2024-07-13 06:05:26.469152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:39.394 [2024-07-13 06:05:26.469166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.394 [2024-07-13 06:05:26.469180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:39.394 [2024-07-13 06:05:26.469194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.394 [2024-07-13 06:05:26.469208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:39.394 [2024-07-13 06:05:26.469267] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:39.394 [2024-07-13 06:05:26.469297] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23ecf60 (9): Bad file descriptor 00:18:39.394 [2024-07-13 06:05:26.474975] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
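The failover recorded in this trace was produced by the commands in the xtrace above: bdevperf was started separately with -z -r /var/tmp/bdevperf.sock, listeners were added on the secondary ports 4421 and 4422, NVMe0 was attached through all three ports, and the primary path on 4420 was then detached so bdev_nvme had to fail over to 4421. Condensed into a sketch using the same RPCs, addresses, and NQN as this run (target-side calls use the default RPC socket, initiator-side calls the bdevperf socket):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # target side: expose the subsystem on the secondary ports as well
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  # initiator side (bdevperf): register the primary path and the two alternates ...
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # ... then drop the primary path so bdev_nvme fails over to 4421
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1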
00:18:39.394 Running I/O for 1 seconds... 00:18:39.394 00:18:39.394 Latency(us) 00:18:39.394 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:39.394 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:39.394 Verification LBA range: start 0x0 length 0x4000 00:18:39.394 NVMe0n1 : 1.01 7932.28 30.99 0.00 0.00 16050.89 2815.07 14656.23 00:18:39.394 =================================================================================================================== 00:18:39.394 Total : 7932.28 30.99 0.00 0.00 16050.89 2815.07 14656.23 00:18:39.394 06:05:30 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:18:39.394 06:05:30 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:39.652 06:05:31 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:39.911 06:05:31 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:39.911 06:05:31 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:18:40.169 06:05:31 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:40.432 06:05:31 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:18:43.712 06:05:34 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:43.712 06:05:34 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:18:43.712 06:05:35 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 89989 00:18:43.712 06:05:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 89989 ']' 00:18:43.712 06:05:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 89989 00:18:43.712 06:05:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:18:43.712 06:05:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:43.712 06:05:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89989 00:18:43.712 killing process with pid 89989 00:18:43.712 06:05:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:43.712 06:05:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:43.712 06:05:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89989' 00:18:43.712 06:05:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 89989 00:18:43.712 06:05:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 89989 00:18:43.712 06:05:35 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:18:43.712 06:05:35 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:43.970 06:05:35 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:18:43.970 06:05:35 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:43.970 06:05:35 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:18:43.970 06:05:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:43.970 06:05:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:18:43.970 06:05:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:43.970 06:05:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:18:43.970 06:05:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:43.970 06:05:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:43.970 rmmod nvme_tcp 00:18:44.229 rmmod nvme_fabrics 00:18:44.230 rmmod nvme_keyring 00:18:44.230 06:05:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:44.230 06:05:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:18:44.230 06:05:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:18:44.230 06:05:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 89743 ']' 00:18:44.230 06:05:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 89743 00:18:44.230 06:05:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 89743 ']' 00:18:44.230 06:05:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 89743 00:18:44.230 06:05:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:18:44.230 06:05:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:44.230 06:05:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89743 00:18:44.230 killing process with pid 89743 00:18:44.230 06:05:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:44.230 06:05:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:44.230 06:05:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89743' 00:18:44.230 06:05:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 89743 00:18:44.230 06:05:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 89743 00:18:44.230 06:05:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:44.230 06:05:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:44.230 06:05:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:44.230 06:05:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:44.230 06:05:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:44.230 06:05:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:44.230 06:05:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:44.230 06:05:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:44.489 06:05:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:44.490 00:18:44.490 real 0m31.355s 00:18:44.490 user 2m1.942s 00:18:44.490 sys 0m5.548s 00:18:44.490 06:05:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:44.490 06:05:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:44.490 ************************************ 00:18:44.490 END TEST nvmf_failover 00:18:44.490 
************************************ 00:18:44.490 06:05:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:44.490 06:05:36 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:18:44.490 06:05:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:44.490 06:05:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:44.490 06:05:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:44.490 ************************************ 00:18:44.490 START TEST nvmf_host_discovery 00:18:44.490 ************************************ 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:18:44.490 * Looking for test storage... 00:18:44.490 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=d95af516-4532-4483-a837-b3cd72acabce 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:44.490 Cannot find device "nvmf_tgt_br" 00:18:44.490 
06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:44.490 Cannot find device "nvmf_tgt_br2" 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:44.490 Cannot find device "nvmf_tgt_br" 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:44.490 Cannot find device "nvmf_tgt_br2" 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:18:44.490 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:44.749 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:44.749 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:44.749 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:44.749 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:18:44.749 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:44.749 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:44.749 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:18:44.749 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:44.749 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:44.749 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:44.749 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:44.749 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:44.749 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:44.749 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:44.750 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:44.750 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:44.750 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:44.750 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:44.750 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:44.750 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:44.750 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:44.750 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:44.750 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:44.750 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:44.750 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:44.750 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:44.750 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:44.750 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:44.750 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:44.750 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:44.750 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:44.750 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:44.750 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 00:18:44.750 00:18:44.750 --- 10.0.0.2 ping statistics --- 00:18:44.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:44.750 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:18:44.750 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:44.750 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:44.750 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:18:44.750 00:18:44.750 --- 10.0.0.3 ping statistics --- 00:18:44.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:44.750 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:18:44.750 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:44.750 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
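(The trace above is nvmf_veth_init from nvmf/common.sh building the throw-away test network: the target side lives in its own network namespace and reaches the initiator over veth pairs joined by a bridge, which the pings around this point sanity-check. The condensed bash sketch below paraphrases the commands shown in the trace rather than quoting the real helper, and drops its error handling.)
  ip netns add nvmf_tgt_ns_spdk                                 # target side runs in its own netns
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator veth pair
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target veth pairs
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                      # NVMF_INITIATOR_IP
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if     # NVMF_FIRST_TARGET_IP
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2    # NVMF_SECOND_TARGET_IP
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up     # bridge joins the host-side ends
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                      # host -> namespace
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1             # namespace -> host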
00:18:44.750 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:18:44.750 00:18:44.750 --- 10.0.0.1 ping statistics --- 00:18:44.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:44.750 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:18:44.750 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:44.750 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:18:44.750 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:44.750 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:44.750 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:44.750 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:44.750 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:44.750 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:44.750 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:45.010 06:05:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:18:45.010 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:45.010 06:05:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:45.010 06:05:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:45.010 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=90326 00:18:45.010 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 90326 00:18:45.010 06:05:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 90326 ']' 00:18:45.010 06:05:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:45.010 06:05:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:45.010 06:05:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:45.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:45.010 06:05:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:45.010 06:05:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:45.010 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:45.010 [2024-07-13 06:05:36.557335] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:18:45.010 [2024-07-13 06:05:36.557477] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:45.010 [2024-07-13 06:05:36.703928] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.268 [2024-07-13 06:05:36.751882] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
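(With the target app coming up inside the namespace, the rest of this trace is host/discovery.sh driving two SPDK apps over JSON-RPC: the nvmf_tgt just started in nvmf_tgt_ns_spdk, and a second nvmf_tgt on /tmp/host.sock that plays the host/initiator role. The sketch below is a condensed paraphrase of the RPC sequence that appears later in the trace, in the order it runs; rpc_cmd is the autotest framework's RPC helper, with -s pointing it at the host app's socket, and the jq pipelines stand in for the get_subsystem_names/get_bdev_list helpers.)
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192               # TCP transport on the target
  rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  rpc_cmd bdev_null_create null0 1000 512                       # two null bdevs to export later
  rpc_cmd bdev_null_create null1 1000 512
  # host side: start discovery against the target's discovery service on port 8009
  rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
  # target side: build a subsystem step by step and let the host follow along via discovery AERs
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test   # host attaches: controller nvme0, bdev nvme0n1
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1                        # second namespace shows up as nvme0n2
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
  rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # between the steps, the script polls the host app until the expected state appears, e.g.:
  rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'                # expect nvme0
  rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'                           # expect nvme0n1 nvme0n2
  rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid'   # ports the host sees (4420, then 4421)
  rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme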
00:18:45.268 [2024-07-13 06:05:36.752181] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:45.268 [2024-07-13 06:05:36.752280] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:45.268 [2024-07-13 06:05:36.752403] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:45.268 [2024-07-13 06:05:36.752516] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:45.268 [2024-07-13 06:05:36.752656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:45.268 [2024-07-13 06:05:36.791468] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:45.268 06:05:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:45.268 06:05:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:18:45.268 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:45.268 06:05:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:45.268 06:05:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:45.268 06:05:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:45.268 06:05:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:45.268 06:05:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.268 06:05:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:45.268 [2024-07-13 06:05:36.888159] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:45.268 06:05:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.268 06:05:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:18:45.268 06:05:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.268 06:05:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:45.268 [2024-07-13 06:05:36.896279] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:18:45.268 06:05:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.268 06:05:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:18:45.268 06:05:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.268 06:05:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:45.269 null0 00:18:45.269 06:05:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.269 06:05:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:18:45.269 06:05:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.269 06:05:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:45.269 null1 00:18:45.269 06:05:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.269 06:05:36 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:18:45.269 06:05:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.269 06:05:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:45.269 06:05:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.269 06:05:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=90356 00:18:45.269 06:05:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:18:45.269 06:05:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 90356 /tmp/host.sock 00:18:45.269 06:05:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 90356 ']' 00:18:45.269 06:05:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:18:45.269 06:05:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:45.269 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:18:45.269 06:05:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:18:45.269 06:05:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:45.269 06:05:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:45.269 [2024-07-13 06:05:36.984644] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:18:45.269 [2024-07-13 06:05:36.984756] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90356 ] 00:18:45.527 [2024-07-13 06:05:37.126010] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.527 [2024-07-13 06:05:37.167471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:45.527 [2024-07-13 06:05:37.199420] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:45.527 06:05:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:45.527 06:05:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:18:45.528 06:05:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:45.528 06:05:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:18:45.528 06:05:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.528 06:05:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:45.786 06:05:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.786 06:05:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:18:45.786 06:05:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.786 06:05:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:45.786 06:05:37 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.786 06:05:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:18:45.786 06:05:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:18:45.786 06:05:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:45.786 06:05:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:45.786 06:05:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.786 06:05:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:45.786 06:05:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:45.786 06:05:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:45.786 06:05:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.786 06:05:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:18:45.786 06:05:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:18:45.786 06:05:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:45.786 06:05:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:45.786 06:05:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.786 06:05:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:45.786 06:05:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:45.786 06:05:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:45.786 06:05:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.786 06:05:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:18:45.786 06:05:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:18:45.786 06:05:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.786 06:05:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:45.786 06:05:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.786 06:05:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:18:45.786 06:05:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:45.786 06:05:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:45.786 06:05:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.786 06:05:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:45.786 06:05:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:45.786 06:05:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:45.786 06:05:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.786 06:05:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:18:45.786 06:05:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:18:45.786 06:05:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:45.786 06:05:37 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.786 06:05:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:45.786 06:05:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:45.786 06:05:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:45.786 06:05:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:45.786 06:05:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.786 06:05:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:18:45.786 06:05:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:18:45.786 06:05:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.044 06:05:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:46.044 06:05:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.044 06:05:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:18:46.044 06:05:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:46.044 06:05:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.044 06:05:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:46.044 06:05:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:46.044 06:05:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:46.044 06:05:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:46.044 06:05:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.044 06:05:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:18:46.044 06:05:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:18:46.044 06:05:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:46.044 06:05:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.044 06:05:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:46.044 06:05:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:46.044 06:05:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:46.044 06:05:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:46.045 06:05:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.045 06:05:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:18:46.045 06:05:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:46.045 06:05:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.045 06:05:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:46.045 [2024-07-13 06:05:37.648747] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:46.045 06:05:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.045 06:05:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:18:46.045 
06:05:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:46.045 06:05:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:46.045 06:05:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:46.045 06:05:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.045 06:05:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:46.045 06:05:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:46.045 06:05:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.045 06:05:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:18:46.045 06:05:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:18:46.045 06:05:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:46.045 06:05:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:46.045 06:05:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:46.045 06:05:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.045 06:05:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:46.045 06:05:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:46.045 06:05:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.304 06:05:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:18:46.304 06:05:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:18:46.304 06:05:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:18:46.304 06:05:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:46.304 06:05:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:46.304 06:05:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:46.304 06:05:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:46.304 06:05:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:46.304 06:05:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:18:46.304 06:05:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:18:46.304 06:05:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.304 06:05:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:46.304 06:05:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:46.304 06:05:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.304 06:05:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:18:46.304 06:05:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:18:46.304 06:05:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:18:46.304 06:05:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:46.304 06:05:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:18:46.304 06:05:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.304 06:05:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:46.304 06:05:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.304 06:05:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:46.304 06:05:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:46.304 06:05:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:46.304 06:05:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:46.304 06:05:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:46.304 06:05:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:18:46.304 06:05:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:46.304 06:05:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:46.304 06:05:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:46.304 06:05:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.304 06:05:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:46.304 06:05:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:46.304 06:05:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.304 06:05:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:18:46.304 06:05:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:18:46.563 [2024-07-13 06:05:38.271323] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:18:46.563 [2024-07-13 06:05:38.271363] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:18:46.563 [2024-07-13 06:05:38.271408] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:46.563 [2024-07-13 06:05:38.277415] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:18:46.822 [2024-07-13 06:05:38.334754] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:18:46.822 [2024-07-13 06:05:38.334808] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:18:47.391 06:05:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:47.391 06:05:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:47.391 06:05:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:18:47.391 06:05:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:47.391 06:05:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:47.391 06:05:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.391 06:05:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:47.391 06:05:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:47.391 06:05:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:47.391 06:05:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.391 06:05:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.391 06:05:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:47.391 06:05:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:18:47.391 06:05:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:18:47.391 06:05:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:47.391 06:05:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:47.391 06:05:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:18:47.391 06:05:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:18:47.391 06:05:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:47.391 06:05:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:47.391 06:05:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:47.391 06:05:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:47.391 06:05:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.391 06:05:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:47.391 06:05:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.391 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:18:47.391 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:47.391 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:18:47.391 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:18:47.391 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:47.391 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:47.391 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # 
eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:18:47.391 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:18:47.391 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:47.391 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.391 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:47.391 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:47.391 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:47.391 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:47.391 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.391 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:18:47.391 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:47.391 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:18:47.391 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:18:47.391 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:47.391 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:47.391 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:47.391 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:47.391 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:47.391 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:18:47.391 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:18:47.391 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:47.391 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.391 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:47.391 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.391 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:18:47.651 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:18:47.651 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:18:47.651 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:47.651 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:18:47.651 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.651 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:47.651 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.651 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:47.651 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:47.651 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:47.651 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:47.651 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:47.651 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:18:47.651 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:47.651 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:47.651 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.651 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:47.651 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:47.651 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:47.651 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.651 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:47.652 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:47.652 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:18:47.652 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:18:47.652 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:47.652 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:47.652 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:47.652 06:05:39 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:18:47.652 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:47.652 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:18:47.652 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:18:47.652 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.652 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:47.652 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:18:47.652 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.652 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:18:47.652 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:18:47.652 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:18:47.652 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:47.652 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:18:47.652 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.652 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:47.652 [2024-07-13 06:05:39.250797] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:47.652 [2024-07-13 06:05:39.251128] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:18:47.652 [2024-07-13 06:05:39.251159] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:47.652 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.652 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:47.652 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:47.652 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:47.652 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:47.652 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:47.652 [2024-07-13 06:05:39.257155] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:18:47.652 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:18:47.652 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:47.652 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:47.652 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:47.652 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:47.652 06:05:39 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.652 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:47.652 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.652 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.652 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:47.652 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:47.652 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:47.652 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:47.652 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:47.652 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:47.652 [2024-07-13 06:05:39.315507] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:18:47.652 [2024-07-13 06:05:39.315532] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:18:47.652 [2024-07-13 06:05:39.315540] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:18:47.652 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:18:47.652 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:47.652 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:47.652 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.652 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:47.652 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:47.652 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:47.652 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.652 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:47.652 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:47.652 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:18:47.652 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:18:47.652 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:47.652 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:47.652 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:18:47.652 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:18:47.652 06:05:39 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:47.652 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.652 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:47.652 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:47.652 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:47.652 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:47.911 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.911 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:18:47.911 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:47.911 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:18:47.911 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:18:47.911 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:47.911 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:47.911 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:47.911 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:47.911 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:47.911 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:18:47.911 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:47.911 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:47.911 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.911 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:47.911 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.911 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:18:47.911 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:18:47.911 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:18:47.911 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:47.911 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:47.911 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.911 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:47.911 [2024-07-13 06:05:39.487507] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:18:47.911 [2024-07-13 06:05:39.487551] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:47.911 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.912 [2024-07-13 06:05:39.492062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:47.912 [2024-07-13 06:05:39.492097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.912 [2024-07-13 06:05:39.492127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:47.912 [2024-07-13 06:05:39.492137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.912 [2024-07-13 06:05:39.492147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:47.912 [2024-07-13 06:05:39.492156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.912 [2024-07-13 06:05:39.492165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:47.912 [2024-07-13 06:05:39.492174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.912 [2024-07-13 06:05:39.492184] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b11e0 is same with the state(5) to be set 00:18:47.912 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:47.912 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:47.912 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:47.912 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:47.912 06:05:39 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:47.912 [2024-07-13 06:05:39.493516] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:18:47.912 [2024-07-13 06:05:39.493538] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:18:47.912 [2024-07-13 06:05:39.493593] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20b11e0 (9): Bad file descriptor 00:18:47.912 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:18:47.912 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:47.912 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.912 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:47.912 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:47.912 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:47.912 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:47.912 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.912 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.912 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:47.912 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:47.912 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:47.912 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:47.912 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:47.912 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:47.912 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:18:47.912 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:47.912 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.912 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:47.912 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:47.912 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:47.912 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:47.912 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.912 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:47.912 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:47.912 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:18:47.912 06:05:39 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:18:47.912 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:47.912 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:47.912 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:18:47.912 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:18:47.912 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:47.912 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:47.912 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:47.912 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.912 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:47.912 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:47.912 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.171 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:18:48.171 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:48.171 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:18:48.171 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:18:48.171 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:48.171 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:48.171 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:48.171 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:48.171 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:48.171 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:18:48.171 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:48.171 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.171 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:48.171 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:48.171 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.171 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:18:48.171 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:18:48.171 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:18:48.171 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:48.171 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:18:48.171 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.171 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:48.171 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.171 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:18:48.171 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:18:48.171 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:48.171 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:48.171 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:18:48.171 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:18:48.171 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:48.171 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:48.171 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:48.171 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.171 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:48.171 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:48.171 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.171 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:18:48.171 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:48.171 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:18:48.171 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:18:48.171 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:48.171 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:48.171 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:18:48.171 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:18:48.172 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:48.172 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:48.172 06:05:39 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.172 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:48.172 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:48.172 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:48.172 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.172 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:18:48.172 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:48.172 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:18:48.172 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:18:48.172 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:48.172 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:48.172 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:48.172 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:48.172 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:48.172 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:18:48.172 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:48.172 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:48.172 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.172 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:48.172 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.430 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:18:48.430 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:18:48.430 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:18:48.430 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:48.430 06:05:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:48.431 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.431 06:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:49.386 [2024-07-13 06:05:40.922632] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:18:49.386 [2024-07-13 06:05:40.922683] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:18:49.386 [2024-07-13 06:05:40.922702] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:49.386 [2024-07-13 06:05:40.928698] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:18:49.386 [2024-07-13 06:05:40.989607] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:18:49.386 [2024-07-13 06:05:40.989666] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:18:49.386 06:05:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.386 06:05:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:49.386 06:05:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:18:49.386 06:05:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:49.386 06:05:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:18:49.386 06:05:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:49.386 06:05:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:18:49.386 06:05:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:49.386 06:05:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:49.386 06:05:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.386 06:05:40 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:18:49.386 request: 00:18:49.386 { 00:18:49.386 "name": "nvme", 00:18:49.386 "trtype": "tcp", 00:18:49.386 "traddr": "10.0.0.2", 00:18:49.386 "adrfam": "ipv4", 00:18:49.386 "trsvcid": "8009", 00:18:49.386 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:49.386 "wait_for_attach": true, 00:18:49.386 "method": "bdev_nvme_start_discovery", 00:18:49.386 "req_id": 1 00:18:49.386 } 00:18:49.386 Got JSON-RPC error response 00:18:49.386 response: 00:18:49.386 { 00:18:49.386 "code": -17, 00:18:49.386 "message": "File exists" 00:18:49.386 } 00:18:49.386 06:05:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:49.386 06:05:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:18:49.386 06:05:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:49.386 06:05:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:49.386 06:05:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:49.386 06:05:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:18:49.386 06:05:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:49.386 06:05:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:49.386 06:05:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.386 06:05:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:49.386 06:05:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:18:49.386 06:05:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:18:49.386 06:05:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.386 06:05:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:18:49.386 06:05:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:18:49.386 06:05:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:49.386 06:05:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.386 06:05:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:49.386 06:05:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:49.386 06:05:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:49.386 06:05:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:49.645 06:05:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.645 06:05:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:49.645 06:05:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:49.645 06:05:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:18:49.645 06:05:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:49.645 06:05:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:18:49.645 06:05:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:49.645 06:05:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:18:49.645 06:05:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:49.645 06:05:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:49.645 06:05:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.645 06:05:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:49.645 request: 00:18:49.645 { 00:18:49.645 "name": "nvme_second", 00:18:49.645 "trtype": "tcp", 00:18:49.645 "traddr": "10.0.0.2", 00:18:49.645 "adrfam": "ipv4", 00:18:49.645 "trsvcid": "8009", 00:18:49.645 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:49.645 "wait_for_attach": true, 00:18:49.645 "method": "bdev_nvme_start_discovery", 00:18:49.645 "req_id": 1 00:18:49.645 } 00:18:49.645 Got JSON-RPC error response 00:18:49.645 response: 00:18:49.645 { 00:18:49.645 "code": -17, 00:18:49.645 "message": "File exists" 00:18:49.645 } 00:18:49.645 06:05:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:49.645 06:05:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:18:49.645 06:05:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:49.645 06:05:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:49.645 06:05:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:49.645 06:05:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:18:49.645 06:05:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:49.645 06:05:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.645 06:05:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:49.645 06:05:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:49.645 06:05:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:18:49.645 06:05:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:18:49.645 06:05:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.645 06:05:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:18:49.645 06:05:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:18:49.645 06:05:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:49.645 06:05:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.645 06:05:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:49.645 06:05:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:49.645 06:05:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:49.645 06:05:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:49.645 06:05:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.645 06:05:41 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:49.645 06:05:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:49.645 06:05:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:18:49.645 06:05:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:49.645 06:05:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:18:49.645 06:05:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:49.645 06:05:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:18:49.645 06:05:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:49.645 06:05:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:49.645 06:05:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.645 06:05:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:50.582 [2024-07-13 06:05:42.278419] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:50.582 [2024-07-13 06:05:42.278531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6780 with addr=10.0.0.2, port=8010 00:18:50.582 [2024-07-13 06:05:42.278569] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:18:50.582 [2024-07-13 06:05:42.278580] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:18:50.582 [2024-07-13 06:05:42.278589] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:18:51.958 [2024-07-13 06:05:43.278405] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:51.958 [2024-07-13 06:05:43.278493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6780 with addr=10.0.0.2, port=8010 00:18:51.958 [2024-07-13 06:05:43.278515] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:18:51.958 [2024-07-13 06:05:43.278526] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:18:51.958 [2024-07-13 06:05:43.278536] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:18:52.895 [2024-07-13 06:05:44.278255] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:18:52.895 request: 00:18:52.895 { 00:18:52.895 "name": "nvme_second", 00:18:52.895 "trtype": "tcp", 00:18:52.895 "traddr": "10.0.0.2", 00:18:52.895 "adrfam": "ipv4", 00:18:52.895 "trsvcid": "8010", 00:18:52.895 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:52.895 "wait_for_attach": false, 00:18:52.895 "attach_timeout_ms": 3000, 00:18:52.895 "method": "bdev_nvme_start_discovery", 00:18:52.895 "req_id": 1 00:18:52.895 } 00:18:52.895 Got JSON-RPC error response 00:18:52.895 response: 00:18:52.895 { 00:18:52.895 "code": -110, 
00:18:52.895 "message": "Connection timed out" 00:18:52.895 } 00:18:52.895 06:05:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:52.895 06:05:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:18:52.895 06:05:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:52.895 06:05:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:52.895 06:05:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:52.895 06:05:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:18:52.895 06:05:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:52.895 06:05:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:52.895 06:05:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.895 06:05:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:18:52.895 06:05:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:18:52.895 06:05:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:52.895 06:05:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.895 06:05:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:18:52.895 06:05:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:18:52.895 06:05:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 90356 00:18:52.895 06:05:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:18:52.895 06:05:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:52.895 06:05:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:18:52.895 06:05:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:52.895 06:05:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:18:52.895 06:05:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:52.895 06:05:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:52.895 rmmod nvme_tcp 00:18:52.895 rmmod nvme_fabrics 00:18:52.895 rmmod nvme_keyring 00:18:52.895 06:05:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:52.895 06:05:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:18:52.895 06:05:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:18:52.895 06:05:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 90326 ']' 00:18:52.895 06:05:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 90326 00:18:52.895 06:05:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 90326 ']' 00:18:52.895 06:05:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 90326 00:18:52.895 06:05:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:18:52.895 06:05:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:52.895 06:05:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90326 00:18:52.895 06:05:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:52.895 06:05:44 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:52.895 killing process with pid 90326 00:18:52.895 06:05:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90326' 00:18:52.895 06:05:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 90326 00:18:52.895 06:05:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 90326 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:53.155 ************************************ 00:18:53.155 END TEST nvmf_host_discovery 00:18:53.155 ************************************ 00:18:53.155 00:18:53.155 real 0m8.661s 00:18:53.155 user 0m17.030s 00:18:53.155 sys 0m1.876s 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:53.155 06:05:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:53.155 06:05:44 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:18:53.155 06:05:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:53.155 06:05:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:53.155 06:05:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:53.155 ************************************ 00:18:53.155 START TEST nvmf_host_multipath_status 00:18:53.155 ************************************ 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:18:53.155 * Looking for test storage... 
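[editor's note] The nvmf_host_discovery trace above repeatedly exercises a small polling pattern: read state over the host-side RPC socket, normalize it with jq/sort/xargs, and retry until an expected value appears. The sketch below is a condensed reconstruction of that pattern from the xtrace output only; helper names (get_subsystem_names, get_bdev_list, waitforcondition), the /tmp/host.sock socket, and the rpc.py path are taken from the trace, while the direct rpc.py call (the harness wraps it in an rpc_cmd helper) and the per-retry sleep are simplifications/assumptions, not the exact upstream definitions.

    # Minimal sketch, assuming the names and socket shown in the trace above.
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path as printed elsewhere in this log

    get_subsystem_names() {
        # Controller names seen by the host-side bdev layer, flattened to one line.
        "$rpc_py" -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }

    get_bdev_list() {
        # Namespace bdevs attached via discovery, e.g. "nvme0n1 nvme0n2".
        "$rpc_py" -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    waitforcondition() {
        # Re-evaluate an arbitrary condition until it holds or the retry budget runs out.
        local cond=$1
        local max=10
        while (( max-- )); do
            eval "$cond" && return 0
            sleep 0.5   # assumed delay; the trace only shows the retry counter
        done
        return 1
    }

    # Example, mirroring host/discovery.sh@130 in the trace above:
    waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'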
00:18:53.155 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=d95af516-4532-4483-a837-b3cd72acabce 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 
-- # NQN=nqn.2016-06.io.spdk:cnode1 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:53.155 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:53.156 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:53.156 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:53.156 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:53.156 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:53.414 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:53.414 Cannot find device "nvmf_tgt_br" 00:18:53.414 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:18:53.414 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip 
link set nvmf_tgt_br2 nomaster 00:18:53.414 Cannot find device "nvmf_tgt_br2" 00:18:53.414 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:18:53.414 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:53.414 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:53.414 Cannot find device "nvmf_tgt_br" 00:18:53.414 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:18:53.414 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:53.414 Cannot find device "nvmf_tgt_br2" 00:18:53.414 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:18:53.414 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:53.414 06:05:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:53.414 06:05:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:53.414 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:53.414 06:05:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:18:53.414 06:05:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:53.414 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:53.414 06:05:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:18:53.414 06:05:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:53.414 06:05:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:53.414 06:05:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:53.414 06:05:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:53.414 06:05:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:53.414 06:05:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:53.414 06:05:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:53.414 06:05:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:53.414 06:05:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:53.414 06:05:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:53.414 06:05:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:53.414 06:05:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:53.414 06:05:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:53.414 06:05:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:53.414 06:05:45 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:53.414 06:05:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:53.414 06:05:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:53.414 06:05:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:53.674 06:05:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:53.674 06:05:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:53.674 06:05:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:53.674 06:05:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:53.674 06:05:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:53.674 06:05:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:53.674 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:53.674 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:18:53.674 00:18:53.674 --- 10.0.0.2 ping statistics --- 00:18:53.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:53.674 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:18:53.674 06:05:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:53.674 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:53.674 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:18:53.674 00:18:53.674 --- 10.0.0.3 ping statistics --- 00:18:53.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:53.674 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:18:53.674 06:05:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:53.674 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:53.674 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:18:53.674 00:18:53.674 --- 10.0.0.1 ping statistics --- 00:18:53.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:53.674 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:18:53.674 06:05:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:53.674 06:05:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:18:53.674 06:05:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:53.674 06:05:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:53.674 06:05:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:53.674 06:05:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:53.674 06:05:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:53.674 06:05:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:53.674 06:05:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:53.674 06:05:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:18:53.674 06:05:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:53.674 06:05:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:53.674 06:05:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:53.674 06:05:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=90805 00:18:53.674 06:05:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:53.674 06:05:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 90805 00:18:53.674 06:05:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 90805 ']' 00:18:53.674 06:05:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:53.674 06:05:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:53.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:53.674 06:05:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:53.674 06:05:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:53.674 06:05:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:53.675 [2024-07-13 06:05:45.301298] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
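[editor's note] The nvmf_veth_init steps traced just above (nvmf/common.sh@141-@207) build the 10.0.0.0/24 test topology that the multipath test relies on: one initiator veth on the host, two target veths inside a network namespace, all bridged together. The following is a readable consolidation of exactly those ip/iptables commands as they appear in the trace; the namespace, interface, and address names are unchanged, while the single-script form, the set -e guard, and the omission of the leftover-interface teardown seen earlier in the trace are editorial. A sketch for a throwaway root-capable host, not the canonical common.sh implementation.

    set -e
    ip netns add nvmf_tgt_ns_spdk

    # Three veth pairs: one for the initiator, two for the target namespace.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # Addressing: 10.0.0.1 = initiator, 10.0.0.2/10.0.0.3 = target listeners.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # Bridge the host-side peers so the /24 is one L2 segment.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # Allow NVMe/TCP (port 4420) in, permit bridge-local forwarding, then verify reachability.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1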
00:18:53.675 [2024-07-13 06:05:45.301423] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:53.938 [2024-07-13 06:05:45.444431] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:53.938 [2024-07-13 06:05:45.491688] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:53.938 [2024-07-13 06:05:45.491753] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:53.938 [2024-07-13 06:05:45.491766] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:53.938 [2024-07-13 06:05:45.491776] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:53.938 [2024-07-13 06:05:45.491785] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:53.938 [2024-07-13 06:05:45.491924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:53.938 [2024-07-13 06:05:45.491943] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.938 [2024-07-13 06:05:45.533210] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:53.938 06:05:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:53.938 06:05:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:18:53.938 06:05:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:53.938 06:05:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:53.938 06:05:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:53.938 06:05:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:53.938 06:05:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=90805 00:18:53.938 06:05:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:54.196 [2024-07-13 06:05:45.897794] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:54.196 06:05:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:54.764 Malloc0 00:18:54.764 06:05:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:18:55.023 06:05:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:55.282 06:05:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:55.541 [2024-07-13 06:05:47.032043] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:55.541 06:05:47 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:55.801 [2024-07-13 06:05:47.284271] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:55.801 06:05:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=90848 00:18:55.801 06:05:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:18:55.801 06:05:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:55.801 06:05:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 90848 /var/tmp/bdevperf.sock 00:18:55.801 06:05:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 90848 ']' 00:18:55.801 06:05:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:55.801 06:05:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:55.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:55.801 06:05:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:55.801 06:05:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:55.801 06:05:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:56.738 06:05:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:56.738 06:05:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:18:56.738 06:05:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:56.997 06:05:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:18:57.255 Nvme0n1 00:18:57.255 06:05:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:57.514 Nvme0n1 00:18:57.773 06:05:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:18:57.773 06:05:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:18:59.677 06:05:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:18:59.677 06:05:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:18:59.936 06:05:51 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:00.193 06:05:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:19:01.127 06:05:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:19:01.127 06:05:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:01.127 06:05:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:01.127 06:05:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:01.386 06:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:01.386 06:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:01.386 06:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:01.386 06:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:01.645 06:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:01.645 06:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:01.645 06:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:01.645 06:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:02.212 06:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:02.213 06:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:02.213 06:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:02.213 06:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:02.213 06:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:02.213 06:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:02.213 06:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:02.213 06:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:02.471 06:05:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:02.471 06:05:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # 
port_status 4421 accessible true 00:19:02.471 06:05:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:02.471 06:05:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:02.736 06:05:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:02.736 06:05:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:19:02.736 06:05:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:03.016 06:05:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:03.286 06:05:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:19:04.218 06:05:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:19:04.218 06:05:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:04.218 06:05:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:04.218 06:05:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:04.476 06:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:04.476 06:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:04.476 06:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:04.476 06:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:05.042 06:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:05.042 06:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:05.042 06:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:05.042 06:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:05.042 06:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:05.042 06:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:05.042 06:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:05.042 06:05:56 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:05.608 06:05:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:05.609 06:05:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:05.609 06:05:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:05.609 06:05:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:05.609 06:05:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:05.609 06:05:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:05.609 06:05:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:05.609 06:05:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:05.866 06:05:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:05.866 06:05:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:19:05.866 06:05:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:06.124 06:05:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:19:06.383 06:05:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:19:07.754 06:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:19:07.754 06:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:07.754 06:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:07.755 06:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:07.755 06:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:07.755 06:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:07.755 06:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:07.755 06:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:08.012 06:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
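Every ANA transition in the trace goes through the same two-call helper: set the ANA state of the 4420 listener to the first argument and of the 4421 listener to the second, then the caller sleeps a second so the initiator has time to observe the change before check_status runs. A sketch under those assumptions; the two RPCs are exactly the ones in the trace and, note, they carry no -s, so they go to the nvmf target's default RPC socket rather than the bdevperf one:

    # (rpc= as in the setup sketch above)
    # set_ANA_state <state for 4420> <state for 4421>
    # valid states: optimized, non_optimized, inaccessible
    set_ANA_state() {
        "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

    # example matching multipath_status.sh@94-@96 above: demote 4420, keep 4421
    # optimized, then expect only 4421 to be the current path
    set_ANA_state non_optimized optimized
    sleep 1
    check_status false true true true true true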
\f\a\l\s\e ]] 00:19:08.012 06:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:08.012 06:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:08.012 06:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:08.269 06:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:08.269 06:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:08.269 06:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:08.269 06:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:08.527 06:06:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:08.527 06:06:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:08.527 06:06:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:08.527 06:06:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:08.785 06:06:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:08.785 06:06:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:08.785 06:06:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:08.785 06:06:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:09.043 06:06:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:09.043 06:06:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:19:09.043 06:06:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:09.301 06:06:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:19:09.865 06:06:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:19:10.801 06:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:19:10.801 06:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:10.801 06:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:10.801 06:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:11.060 06:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:11.060 06:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:11.060 06:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:11.060 06:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:11.319 06:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:11.319 06:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:11.319 06:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:11.319 06:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:11.576 06:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:11.576 06:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:11.576 06:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:11.576 06:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:11.834 06:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:11.834 06:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:11.834 06:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:11.834 06:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:12.173 06:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:12.173 06:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:19:12.173 06:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:12.173 06:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:12.431 06:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:12.431 06:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible 
inaccessible 00:19:12.431 06:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:19:12.690 06:06:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:19:12.690 06:06:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:19:14.064 06:06:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:19:14.064 06:06:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:14.064 06:06:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:14.064 06:06:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:14.064 06:06:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:14.064 06:06:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:14.064 06:06:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:14.064 06:06:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:14.323 06:06:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:14.323 06:06:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:14.323 06:06:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:14.323 06:06:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:14.581 06:06:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:14.581 06:06:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:14.581 06:06:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:14.581 06:06:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:14.840 06:06:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:14.840 06:06:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:19:14.840 06:06:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:14.840 06:06:06 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:15.098 06:06:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:15.098 06:06:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:19:15.098 06:06:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:15.098 06:06:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:15.356 06:06:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:15.356 06:06:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:19:15.356 06:06:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:19:15.615 06:06:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:15.874 06:06:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:19:16.828 06:06:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:19:16.828 06:06:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:16.828 06:06:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:16.828 06:06:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:17.087 06:06:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:17.087 06:06:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:17.087 06:06:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:17.087 06:06:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:17.346 06:06:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:17.346 06:06:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:17.346 06:06:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:17.346 06:06:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:17.913 06:06:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:17.913 06:06:09 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:17.913 06:06:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:17.913 06:06:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:17.913 06:06:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:17.913 06:06:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:19:17.914 06:06:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:17.914 06:06:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:18.172 06:06:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:18.172 06:06:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:18.172 06:06:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:18.172 06:06:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:18.430 06:06:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:18.430 06:06:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:19:18.997 06:06:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:19:18.997 06:06:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:19:18.997 06:06:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:19.254 06:06:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:19:20.626 06:06:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:19:20.626 06:06:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:20.626 06:06:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:20.626 06:06:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:20.626 06:06:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:20.626 06:06:12 nvmf_tcp.nvmf_host_multipath_status -- 
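At multipath_status.sh@116 the test flips the bdev's multipath policy from the default (active_passive) to active_active, and the expectations change with it: from here on check_status asserts current==true on both 4420 and 4421 whenever both listeners are optimized, or both non_optimized with no optimized path available, instead of exactly one current path as in the first half of the run. The policy switch itself is a single RPC on the bdevperf socket; a sketch, reusing the rpc= and sock= variables assumed earlier:

    # spread I/O across every optimized path of Nvme0n1 instead of
    # failover-style selection of a single active path
    "$rpc" -s "$sock" bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active

With this policy in place, the optimized/optimized case at @119-@121 is expected to show both ports as current, which is exactly what the following check_status true true true true true true asserts.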
host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:20.626 06:06:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:20.626 06:06:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:20.883 06:06:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:20.883 06:06:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:20.883 06:06:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:20.883 06:06:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:21.141 06:06:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:21.141 06:06:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:21.141 06:06:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:21.141 06:06:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:21.399 06:06:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:21.399 06:06:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:21.399 06:06:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:21.399 06:06:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:21.657 06:06:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:21.657 06:06:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:21.657 06:06:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:21.657 06:06:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:21.914 06:06:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:21.914 06:06:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:19:21.914 06:06:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:22.172 06:06:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:22.429 06:06:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:19:23.411 06:06:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:19:23.411 06:06:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:23.411 06:06:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:23.411 06:06:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:23.669 06:06:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:23.669 06:06:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:23.669 06:06:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:23.669 06:06:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:23.927 06:06:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:23.927 06:06:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:23.927 06:06:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:23.927 06:06:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:24.185 06:06:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:24.185 06:06:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:24.185 06:06:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:24.185 06:06:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:24.443 06:06:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:24.443 06:06:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:24.443 06:06:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:24.443 06:06:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:24.701 06:06:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:24.701 06:06:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:24.701 06:06:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:24.701 06:06:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:25.266 06:06:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:25.266 06:06:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:19:25.266 06:06:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:25.266 06:06:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:19:25.523 06:06:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:19:26.899 06:06:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:19:26.899 06:06:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:26.899 06:06:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:26.899 06:06:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:26.899 06:06:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:26.899 06:06:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:26.899 06:06:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:26.899 06:06:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:27.156 06:06:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:27.156 06:06:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:27.156 06:06:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:27.156 06:06:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:27.414 06:06:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:27.414 06:06:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:27.414 06:06:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:27.414 06:06:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:19:27.672 06:06:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:27.672 06:06:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:27.672 06:06:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:27.672 06:06:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:27.930 06:06:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:27.930 06:06:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:27.930 06:06:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:27.930 06:06:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:28.188 06:06:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:28.188 06:06:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:19:28.188 06:06:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:28.446 06:06:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:19:28.704 06:06:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:19:29.637 06:06:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:19:29.637 06:06:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:29.637 06:06:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:29.637 06:06:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:29.895 06:06:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:29.895 06:06:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:29.895 06:06:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:29.895 06:06:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:30.153 06:06:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:30.153 06:06:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:19:30.153 06:06:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:30.153 06:06:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:30.719 06:06:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:30.719 06:06:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:30.719 06:06:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:30.719 06:06:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:30.719 06:06:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:30.719 06:06:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:30.983 06:06:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:30.983 06:06:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:31.300 06:06:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:31.300 06:06:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:19:31.300 06:06:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:31.300 06:06:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:31.563 06:06:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:31.563 06:06:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 90848 00:19:31.563 06:06:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 90848 ']' 00:19:31.563 06:06:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 90848 00:19:31.563 06:06:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:19:31.563 06:06:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:31.563 06:06:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90848 00:19:31.563 killing process with pid 90848 00:19:31.563 06:06:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:31.563 06:06:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:31.563 06:06:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90848' 00:19:31.563 06:06:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 90848 00:19:31.563 06:06:23 
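Teardown at multipath_status.sh@137 goes through autotest_common.sh's killprocess: confirm the pid is still alive, log what is being killed, send the signal, and then the script waits for bdevperf to exit before dumping its captured output (try.txt) into the log below. A reduced sketch of that flow; the real helper also inspects the process name and handles sudo-run and non-Linux targets, which are omitted here:

    # killprocess <pid>: refuse an empty pid, check the process still exists,
    # announce it, then terminate it
    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid"                      # fails if the process already exited
        echo "killing process with pid $pid"
        kill "$pid"
    }

    killprocess "$bdevperf_pid"
    wait "$bdevperf_pid"                    # let bdevperf finish writing its output
    cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt   # captured bdevperf log, shown below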
nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 90848 00:19:31.563 Connection closed with partial response: 00:19:31.563 00:19:31.563 00:19:31.563 06:06:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 90848 00:19:31.563 06:06:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:31.563 [2024-07-13 06:05:47.349414] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:19:31.564 [2024-07-13 06:05:47.349519] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90848 ] 00:19:31.564 [2024-07-13 06:05:47.489556] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.564 [2024-07-13 06:05:47.530092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:31.564 [2024-07-13 06:05:47.564441] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:31.564 Running I/O for 90 seconds... 00:19:31.564 [2024-07-13 06:06:04.155843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:88656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.564 [2024-07-13 06:06:04.155915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:31.564 [2024-07-13 06:06:04.155974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:88664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.564 [2024-07-13 06:06:04.155996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:31.564 [2024-07-13 06:06:04.156020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:88672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.564 [2024-07-13 06:06:04.156036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:31.564 [2024-07-13 06:06:04.156059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:88680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.564 [2024-07-13 06:06:04.156075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:31.564 [2024-07-13 06:06:04.156096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:88688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.564 [2024-07-13 06:06:04.156111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:31.564 [2024-07-13 06:06:04.156133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:88696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.564 [2024-07-13 06:06:04.156148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:31.564 [2024-07-13 06:06:04.156171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:88704 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:19:31.564 [2024-07-13 06:06:04.156186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.564 [2024-07-13 06:06:04.156208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:88712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.564 [2024-07-13 06:06:04.156223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.564 [2024-07-13 06:06:04.156245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:88272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.564 [2024-07-13 06:06:04.156260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:31.564 [2024-07-13 06:06:04.156282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:88280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.564 [2024-07-13 06:06:04.156298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:31.564 [2024-07-13 06:06:04.156319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:88288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.564 [2024-07-13 06:06:04.156352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:31.564 [2024-07-13 06:06:04.156390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:88296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.564 [2024-07-13 06:06:04.156409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:31.564 [2024-07-13 06:06:04.156431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:88304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.564 [2024-07-13 06:06:04.156446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:31.564 [2024-07-13 06:06:04.156468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:88312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.564 [2024-07-13 06:06:04.156483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:31.564 [2024-07-13 06:06:04.156504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:88320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.564 [2024-07-13 06:06:04.156521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:31.564 [2024-07-13 06:06:04.156543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:88328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.564 [2024-07-13 06:06:04.156558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:31.564 [2024-07-13 06:06:04.156598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:112 nsid:1 lba:88720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.564 [2024-07-13 06:06:04.156618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:31.564 [2024-07-13 06:06:04.156641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:88728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.564 [2024-07-13 06:06:04.156656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:31.564 [2024-07-13 06:06:04.156678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.564 [2024-07-13 06:06:04.156694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:31.564 [2024-07-13 06:06:04.156716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:88744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.564 [2024-07-13 06:06:04.156731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:31.564 [2024-07-13 06:06:04.156753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:88752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.564 [2024-07-13 06:06:04.156768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:31.564 [2024-07-13 06:06:04.156789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.564 [2024-07-13 06:06:04.156804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:31.564 [2024-07-13 06:06:04.156826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:88768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.564 [2024-07-13 06:06:04.156852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:31.564 [2024-07-13 06:06:04.156876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:88776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.564 [2024-07-13 06:06:04.156892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:31.564 [2024-07-13 06:06:04.156914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:88784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.564 [2024-07-13 06:06:04.156929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:31.564 [2024-07-13 06:06:04.156951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:88792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.564 [2024-07-13 06:06:04.156966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:31.564 [2024-07-13 06:06:04.156989] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:88800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.564 [2024-07-13 06:06:04.157004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:31.564 [2024-07-13 06:06:04.157026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:88808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.564 [2024-07-13 06:06:04.157041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:31.564 [2024-07-13 06:06:04.157063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:88816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.564 [2024-07-13 06:06:04.157078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:31.564 [2024-07-13 06:06:04.157100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:88824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.564 [2024-07-13 06:06:04.157115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:31.564 [2024-07-13 06:06:04.157137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:88832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.564 [2024-07-13 06:06:04.157152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:31.564 [2024-07-13 06:06:04.157174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:88840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.564 [2024-07-13 06:06:04.157189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:31.564 [2024-07-13 06:06:04.157211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:88848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.564 [2024-07-13 06:06:04.157226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:31.564 [2024-07-13 06:06:04.157249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:88856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.564 [2024-07-13 06:06:04.157264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:31.564 [2024-07-13 06:06:04.157286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:88864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.564 [2024-07-13 06:06:04.157301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:31.564 [2024-07-13 06:06:04.157332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:88872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.564 [2024-07-13 06:06:04.157347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001d p:0 m:0 
dnr:0 00:19:31.564 [2024-07-13 06:06:04.157381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:88880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.564 [2024-07-13 06:06:04.157400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:31.564 [2024-07-13 06:06:04.157423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:88888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.564 [2024-07-13 06:06:04.157438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:31.564 [2024-07-13 06:06:04.157460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:88896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.564 [2024-07-13 06:06:04.157475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:31.564 [2024-07-13 06:06:04.157498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:88904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.565 [2024-07-13 06:06:04.157513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:31.565 [2024-07-13 06:06:04.157535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:88336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.565 [2024-07-13 06:06:04.157550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:31.565 [2024-07-13 06:06:04.157572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:88344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.565 [2024-07-13 06:06:04.157588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:31.565 [2024-07-13 06:06:04.157610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:88352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.565 [2024-07-13 06:06:04.157625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:31.565 [2024-07-13 06:06:04.157647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:88360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.565 [2024-07-13 06:06:04.157662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:31.565 [2024-07-13 06:06:04.157684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:88368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.565 [2024-07-13 06:06:04.157700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:31.565 [2024-07-13 06:06:04.157722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:88376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.565 [2024-07-13 06:06:04.157737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:31.565 [2024-07-13 06:06:04.157760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:88384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.565 [2024-07-13 06:06:04.157776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:31.565 [2024-07-13 06:06:04.157807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:88392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.565 [2024-07-13 06:06:04.157823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:31.565 [2024-07-13 06:06:04.157862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.565 [2024-07-13 06:06:04.157882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:31.565 [2024-07-13 06:06:04.157904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:88920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.565 [2024-07-13 06:06:04.157920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:31.565 [2024-07-13 06:06:04.157942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:88928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.565 [2024-07-13 06:06:04.157957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:31.565 [2024-07-13 06:06:04.157979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:88936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.565 [2024-07-13 06:06:04.157995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:31.565 [2024-07-13 06:06:04.158017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:88944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.565 [2024-07-13 06:06:04.158032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:31.565 [2024-07-13 06:06:04.158054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:88952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.565 [2024-07-13 06:06:04.158069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:31.565 [2024-07-13 06:06:04.158102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:88960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.565 [2024-07-13 06:06:04.158120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:31.565 [2024-07-13 06:06:04.158142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:88968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.565 [2024-07-13 06:06:04.158157] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:31.565 [2024-07-13 06:06:04.158179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:88400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.565 [2024-07-13 06:06:04.158194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:31.565 [2024-07-13 06:06:04.158216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:88408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.565 [2024-07-13 06:06:04.158232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:31.565 [2024-07-13 06:06:04.158255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:88416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.565 [2024-07-13 06:06:04.158270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:31.565 [2024-07-13 06:06:04.158292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:88424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.565 [2024-07-13 06:06:04.158316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:31.565 [2024-07-13 06:06:04.158340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:88432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.565 [2024-07-13 06:06:04.158356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:31.565 [2024-07-13 06:06:04.158392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:88440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.565 [2024-07-13 06:06:04.158410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:31.565 [2024-07-13 06:06:04.158434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:88448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.565 [2024-07-13 06:06:04.158450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:31.565 [2024-07-13 06:06:04.158473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:88456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.565 [2024-07-13 06:06:04.158488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:31.565 [2024-07-13 06:06:04.158510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:88976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.565 [2024-07-13 06:06:04.158525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:31.565 [2024-07-13 06:06:04.158547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:88984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:31.565 [2024-07-13 06:06:04.158562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:31.565 [2024-07-13 06:06:04.158583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.565 [2024-07-13 06:06:04.158598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:31.565 [2024-07-13 06:06:04.158630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:89000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.565 [2024-07-13 06:06:04.158645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:31.565 [2024-07-13 06:06:04.158667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:89008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.565 [2024-07-13 06:06:04.158682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:31.565 [2024-07-13 06:06:04.158704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:89016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.565 [2024-07-13 06:06:04.158719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:31.565 [2024-07-13 06:06:04.158741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:89024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.565 [2024-07-13 06:06:04.158756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:31.565 [2024-07-13 06:06:04.158778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:89032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.565 [2024-07-13 06:06:04.158801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:31.565 [2024-07-13 06:06:04.158825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:88464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.565 [2024-07-13 06:06:04.158840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:31.565 [2024-07-13 06:06:04.158863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:88472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.565 [2024-07-13 06:06:04.158878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:31.565 [2024-07-13 06:06:04.158900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:88480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.565 [2024-07-13 06:06:04.158915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:31.565 [2024-07-13 06:06:04.158938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 
lba:88488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.565 [2024-07-13 06:06:04.158954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:31.565 [2024-07-13 06:06:04.158976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:88496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.565 [2024-07-13 06:06:04.158991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:31.565 [2024-07-13 06:06:04.159013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:88504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.565 [2024-07-13 06:06:04.159028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:31.565 [2024-07-13 06:06:04.159050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:88512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.565 [2024-07-13 06:06:04.159065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:31.565 [2024-07-13 06:06:04.159087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:88520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.565 [2024-07-13 06:06:04.159102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:31.565 [2024-07-13 06:06:04.159124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:88528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.565 [2024-07-13 06:06:04.159140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:31.566 [2024-07-13 06:06:04.159170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:88536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.566 [2024-07-13 06:06:04.159185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:31.566 [2024-07-13 06:06:04.159207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:88544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.566 [2024-07-13 06:06:04.159222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:31.566 [2024-07-13 06:06:04.159244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:88552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.566 [2024-07-13 06:06:04.159266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:31.566 [2024-07-13 06:06:04.159292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:88560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.566 [2024-07-13 06:06:04.159320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:31.566 [2024-07-13 06:06:04.159345] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:88568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.566 [2024-07-13 06:06:04.159361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:31.566 [2024-07-13 06:06:04.159400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:88576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.566 [2024-07-13 06:06:04.159417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:31.566 [2024-07-13 06:06:04.159440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:88584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.566 [2024-07-13 06:06:04.159461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:31.566 [2024-07-13 06:06:04.159488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:89040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.566 [2024-07-13 06:06:04.159505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:31.566 [2024-07-13 06:06:04.159527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:89048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.566 [2024-07-13 06:06:04.159542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:31.566 [2024-07-13 06:06:04.159564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.566 [2024-07-13 06:06:04.159580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:31.566 [2024-07-13 06:06:04.159601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:89064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.566 [2024-07-13 06:06:04.159617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:31.566 [2024-07-13 06:06:04.159639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:89072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.566 [2024-07-13 06:06:04.159654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:31.566 [2024-07-13 06:06:04.159675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:89080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.566 [2024-07-13 06:06:04.159691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:31.566 [2024-07-13 06:06:04.159712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:89088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.566 [2024-07-13 06:06:04.159728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 
00:19:31.566 [2024-07-13 06:06:04.159750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.566 [2024-07-13 06:06:04.159765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:31.566 [2024-07-13 06:06:04.159797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.566 [2024-07-13 06:06:04.159813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:31.566 [2024-07-13 06:06:04.159836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:89112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.566 [2024-07-13 06:06:04.159851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:31.566 [2024-07-13 06:06:04.159873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:89120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.566 [2024-07-13 06:06:04.159888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:31.566 [2024-07-13 06:06:04.159910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:89128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.566 [2024-07-13 06:06:04.159926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:31.566 [2024-07-13 06:06:04.159948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:89136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.566 [2024-07-13 06:06:04.159963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:31.566 [2024-07-13 06:06:04.159985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:89144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.566 [2024-07-13 06:06:04.160000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:31.566 [2024-07-13 06:06:04.160027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:89152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.566 [2024-07-13 06:06:04.160044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:31.566 [2024-07-13 06:06:04.160066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:89160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.566 [2024-07-13 06:06:04.160083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:31.566 [2024-07-13 06:06:04.160106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:88592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.566 [2024-07-13 06:06:04.160121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:111 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:31.566 [2024-07-13 06:06:04.160143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:88600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.566 [2024-07-13 06:06:04.160159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:31.566 [2024-07-13 06:06:04.160181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:88608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.566 [2024-07-13 06:06:04.160196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:31.566 [2024-07-13 06:06:04.160218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:88616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.566 [2024-07-13 06:06:04.160233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:31.566 [2024-07-13 06:06:04.160263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:88624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.566 [2024-07-13 06:06:04.160279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:31.566 [2024-07-13 06:06:04.160301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:88632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.566 [2024-07-13 06:06:04.160316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:31.566 [2024-07-13 06:06:04.160339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:88640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.566 [2024-07-13 06:06:04.160354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:31.566 [2024-07-13 06:06:04.161125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:88648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.566 [2024-07-13 06:06:04.161152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:31.566 [2024-07-13 06:06:04.161189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.566 [2024-07-13 06:06:04.161206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:31.566 [2024-07-13 06:06:04.161238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:89176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.566 [2024-07-13 06:06:04.161254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:31.566 [2024-07-13 06:06:04.161284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:89184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.566 [2024-07-13 06:06:04.161300] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:31.566 [2024-07-13 06:06:04.161332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.566 [2024-07-13 06:06:04.161348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:31.566 [2024-07-13 06:06:04.161392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.566 [2024-07-13 06:06:04.161411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:31.566 [2024-07-13 06:06:04.161442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:89208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.566 [2024-07-13 06:06:04.161459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:31.566 [2024-07-13 06:06:04.161494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:89216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.566 [2024-07-13 06:06:04.161510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:31.566 [2024-07-13 06:06:04.161560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:89224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.566 [2024-07-13 06:06:04.161583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:31.566 [2024-07-13 06:06:04.161615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:89232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.566 [2024-07-13 06:06:04.161649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:31.566 [2024-07-13 06:06:04.161681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:89240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.566 [2024-07-13 06:06:04.161698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:31.566 [2024-07-13 06:06:04.161729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:89248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.566 [2024-07-13 06:06:04.161745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:31.567 [2024-07-13 06:06:04.161775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:89256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.567 [2024-07-13 06:06:04.161791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:31.567 [2024-07-13 06:06:04.161822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:89264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:31.567 [2024-07-13 06:06:04.161838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:31.567 [2024-07-13 06:06:04.161869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.567 [2024-07-13 06:06:04.161885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:31.567 [2024-07-13 06:06:04.161916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.567 [2024-07-13 06:06:04.161932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:31.567 [2024-07-13 06:06:04.161967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:89288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.567 [2024-07-13 06:06:04.161984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:31.567 [2024-07-13 06:06:20.250624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:36056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.567 [2024-07-13 06:06:20.250695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:31.567 [2024-07-13 06:06:20.250732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:36088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.567 [2024-07-13 06:06:20.250750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:31.567 [2024-07-13 06:06:20.250773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:36112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.567 [2024-07-13 06:06:20.250788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:31.567 [2024-07-13 06:06:20.250810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:36144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.567 [2024-07-13 06:06:20.250825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:31.567 [2024-07-13 06:06:20.250847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:36472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.567 [2024-07-13 06:06:20.250888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:31.567 [2024-07-13 06:06:20.250923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:36488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.567 [2024-07-13 06:06:20.250939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:31.567 [2024-07-13 06:06:20.250961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:36504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.567 [2024-07-13 06:06:20.250976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:31.567 [2024-07-13 06:06:20.250997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:36520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.567 [2024-07-13 06:06:20.251013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:31.567 [2024-07-13 06:06:20.251035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.567 [2024-07-13 06:06:20.251050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:31.567 [2024-07-13 06:06:20.251072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:36160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.567 [2024-07-13 06:06:20.251087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:31.567 [2024-07-13 06:06:20.251108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:36200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.567 [2024-07-13 06:06:20.251123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:31.567 [2024-07-13 06:06:20.251145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:36552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.567 [2024-07-13 06:06:20.251159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:31.567 [2024-07-13 06:06:20.251181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:36568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.567 [2024-07-13 06:06:20.251195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:31.567 [2024-07-13 06:06:20.251217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:36584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.567 [2024-07-13 06:06:20.251232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:31.567 [2024-07-13 06:06:20.251253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:36600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.567 [2024-07-13 06:06:20.251267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:31.567 [2024-07-13 06:06:20.251289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:36616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.567 [2024-07-13 06:06:20.251304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:31.567 [2024-07-13 06:06:20.251326] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:36632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.567 [2024-07-13 06:06:20.251340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:31.567 [2024-07-13 06:06:20.251388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:36648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.567 [2024-07-13 06:06:20.251408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:31.567 [2024-07-13 06:06:20.251431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:36664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.567 [2024-07-13 06:06:20.251447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:31.567 [2024-07-13 06:06:20.251468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:36680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.567 [2024-07-13 06:06:20.251484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:31.567 [2024-07-13 06:06:20.251506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:36080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.567 [2024-07-13 06:06:20.251521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:31.567 [2024-07-13 06:06:20.251543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:36120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.567 [2024-07-13 06:06:20.251558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:31.567 [2024-07-13 06:06:20.251580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:36152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.567 [2024-07-13 06:06:20.251595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:31.567 [2024-07-13 06:06:20.251617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:36240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.567 [2024-07-13 06:06:20.251633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:31.567 [2024-07-13 06:06:20.251654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:36272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.567 [2024-07-13 06:06:20.251669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:31.567 [2024-07-13 06:06:20.251691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:36688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.567 [2024-07-13 06:06:20.251706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002c p:0 m:0 dnr:0 
00:19:31.567 [2024-07-13 06:06:20.251728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:36704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.567 [2024-07-13 06:06:20.251743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:31.567 [2024-07-13 06:06:20.251766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:36720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.567 [2024-07-13 06:06:20.251781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:31.567 [2024-07-13 06:06:20.251803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:36728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.567 [2024-07-13 06:06:20.251818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:31.567 [2024-07-13 06:06:20.251849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:36744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.567 [2024-07-13 06:06:20.251865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:31.567 [2024-07-13 06:06:20.251887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.567 [2024-07-13 06:06:20.251903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:31.567 [2024-07-13 06:06:20.251925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:36176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.567 [2024-07-13 06:06:20.251940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:31.567 [2024-07-13 06:06:20.251962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:36208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.567 [2024-07-13 06:06:20.251977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:31.567 [2024-07-13 06:06:20.252000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:36232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.568 [2024-07-13 06:06:20.252016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:31.568 [2024-07-13 06:06:20.252038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:36264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.568 [2024-07-13 06:06:20.252053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:31.568 [2024-07-13 06:06:20.252074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:36776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.568 [2024-07-13 06:06:20.252090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:31.568 [2024-07-13 06:06:20.252112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:36312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.568 [2024-07-13 06:06:20.252137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:31.568 [2024-07-13 06:06:20.252159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:36344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.568 [2024-07-13 06:06:20.252174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:31.568 [2024-07-13 06:06:20.252196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:36376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.568 [2024-07-13 06:06:20.252211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:31.568 [2024-07-13 06:06:20.252233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:36408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.568 [2024-07-13 06:06:20.252248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:31.568 [2024-07-13 06:06:20.252270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:36440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.568 [2024-07-13 06:06:20.252285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:31.568 [2024-07-13 06:06:20.252307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:36320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.568 [2024-07-13 06:06:20.252330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:31.568 [2024-07-13 06:06:20.252353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:36352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.568 [2024-07-13 06:06:20.252380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:31.568 [2024-07-13 06:06:20.252404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:36384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.568 [2024-07-13 06:06:20.252420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:31.568 [2024-07-13 06:06:20.252442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:36416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.568 [2024-07-13 06:06:20.252458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:31.568 [2024-07-13 06:06:20.252480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:36448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.568 [2024-07-13 06:06:20.252495] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:31.568 [2024-07-13 06:06:20.253824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:36792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.568 [2024-07-13 06:06:20.253855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:31.568 [2024-07-13 06:06:20.253883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:36808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.568 [2024-07-13 06:06:20.253900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:31.568 [2024-07-13 06:06:20.253923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:36824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.568 [2024-07-13 06:06:20.253938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:31.568 [2024-07-13 06:06:20.253960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:36840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.568 [2024-07-13 06:06:20.253976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:31.568 [2024-07-13 06:06:20.253998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:36856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.568 [2024-07-13 06:06:20.254013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:31.568 [2024-07-13 06:06:20.254035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:36872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.568 [2024-07-13 06:06:20.254050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:31.568 [2024-07-13 06:06:20.254072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:36888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.568 [2024-07-13 06:06:20.254087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:31.568 [2024-07-13 06:06:20.254120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:36904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.568 [2024-07-13 06:06:20.254153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:31.568 [2024-07-13 06:06:20.254177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:36920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.568 [2024-07-13 06:06:20.254194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:31.568 [2024-07-13 06:06:20.254216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:36464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:31.568 [2024-07-13 06:06:20.254230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:31.568 [2024-07-13 06:06:20.254252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:36496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.568 [2024-07-13 06:06:20.254267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:31.568 [2024-07-13 06:06:20.254290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:36528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.568 [2024-07-13 06:06:20.254305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:31.568 [2024-07-13 06:06:20.254328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.568 [2024-07-13 06:06:20.254343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:31.568 [2024-07-13 06:06:20.254364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:36952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.568 [2024-07-13 06:06:20.254395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:31.568 [2024-07-13 06:06:20.254419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:36968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.568 [2024-07-13 06:06:20.254443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:31.568 [2024-07-13 06:06:20.254465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:36984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.568 [2024-07-13 06:06:20.254481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:31.568 [2024-07-13 06:06:20.254502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:37000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.568 [2024-07-13 06:06:20.254518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:31.568 [2024-07-13 06:06:20.254540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:37016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.568 [2024-07-13 06:06:20.254555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:31.568 [2024-07-13 06:06:20.254593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:37032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.568 [2024-07-13 06:06:20.254612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:31.568 [2024-07-13 06:06:20.254636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 
lba:37048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.568 [2024-07-13 06:06:20.254653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:31.568 [2024-07-13 06:06:20.254685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:37064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.568 [2024-07-13 06:06:20.254701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:31.568 [2024-07-13 06:06:20.254723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:36088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.568 [2024-07-13 06:06:20.254738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:31.568 [2024-07-13 06:06:20.254760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:36144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.568 [2024-07-13 06:06:20.254775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:31.568 [2024-07-13 06:06:20.254797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.568 [2024-07-13 06:06:20.254812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:31.568 [2024-07-13 06:06:20.254834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:36520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.569 [2024-07-13 06:06:20.254849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:31.569 [2024-07-13 06:06:20.254871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:36160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.569 [2024-07-13 06:06:20.254886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:31.569 [2024-07-13 06:06:20.254908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:36552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.569 [2024-07-13 06:06:20.254923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:31.569 [2024-07-13 06:06:20.254945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:36584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.569 [2024-07-13 06:06:20.254961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:31.569 [2024-07-13 06:06:20.254983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.569 [2024-07-13 06:06:20.254998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:31.569 [2024-07-13 06:06:20.257451] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:36648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.569 [2024-07-13 06:06:20.257480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:31.569 [2024-07-13 06:06:20.257508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:36680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.569 [2024-07-13 06:06:20.257525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:31.569 [2024-07-13 06:06:20.257547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:36120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.569 [2024-07-13 06:06:20.257563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:31.569 [2024-07-13 06:06:20.257599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:36240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.569 [2024-07-13 06:06:20.257616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:31.569 [2024-07-13 06:06:20.257638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:36688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.569 [2024-07-13 06:06:20.257653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:31.569 [2024-07-13 06:06:20.257675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:36720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.569 [2024-07-13 06:06:20.257690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:31.569 [2024-07-13 06:06:20.257711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.569 [2024-07-13 06:06:20.257726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:31.569 [2024-07-13 06:06:20.257748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:36176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.569 [2024-07-13 06:06:20.257763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:31.569 [2024-07-13 06:06:20.257785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:36232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.569 [2024-07-13 06:06:20.257800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:31.569 [2024-07-13 06:06:20.257822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:36776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.569 [2024-07-13 06:06:20.257837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 
00:19:31.569 [2024-07-13 06:06:20.257859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:36344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.569 [2024-07-13 06:06:20.257873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:31.569 [2024-07-13 06:06:20.257895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:36408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.569 [2024-07-13 06:06:20.257910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:31.569 [2024-07-13 06:06:20.257932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:36320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.569 [2024-07-13 06:06:20.257947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:31.569 [2024-07-13 06:06:20.257969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:36384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.569 [2024-07-13 06:06:20.257984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:31.569 [2024-07-13 06:06:20.258005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:36448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.569 [2024-07-13 06:06:20.258021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:31.569 [2024-07-13 06:06:20.258043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:37080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.569 [2024-07-13 06:06:20.258066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:31.569 [2024-07-13 06:06:20.258089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:37096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.569 [2024-07-13 06:06:20.258117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:31.569 [2024-07-13 06:06:20.258141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:37112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.569 [2024-07-13 06:06:20.258156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:31.569 [2024-07-13 06:06:20.258178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:36592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.569 [2024-07-13 06:06:20.258194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:31.569 [2024-07-13 06:06:20.258216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:36624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.569 [2024-07-13 06:06:20.258231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:31.569 [2024-07-13 06:06:20.258253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:36656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.569 [2024-07-13 06:06:20.258268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:31.569 [2024-07-13 06:06:20.258290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:36696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.569 [2024-07-13 06:06:20.258305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:31.569 [2024-07-13 06:06:20.258327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:36736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.569 [2024-07-13 06:06:20.258342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:31.569 [2024-07-13 06:06:20.258364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:36768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.569 [2024-07-13 06:06:20.258393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:31.569 [2024-07-13 06:06:20.258435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:37128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.569 [2024-07-13 06:06:20.258455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:31.569 [2024-07-13 06:06:20.258487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:37144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.569 [2024-07-13 06:06:20.258502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:31.569 [2024-07-13 06:06:20.258524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:37160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.569 [2024-07-13 06:06:20.258539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:31.569 [2024-07-13 06:06:20.258562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.569 [2024-07-13 06:06:20.258586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:31.569 [2024-07-13 06:06:20.258609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:37192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.569 [2024-07-13 06:06:20.258625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:31.569 [2024-07-13 06:06:20.258647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:37208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.569 [2024-07-13 06:06:20.258662] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:31.569 [2024-07-13 06:06:20.258684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:37224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.569 [2024-07-13 06:06:20.258700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:31.569 [2024-07-13 06:06:20.258721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:37240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.569 [2024-07-13 06:06:20.258736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:31.569 [2024-07-13 06:06:20.258759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:37256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.569 [2024-07-13 06:06:20.258774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:31.569 [2024-07-13 06:06:20.258795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:37272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.569 [2024-07-13 06:06:20.258811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:31.569 [2024-07-13 06:06:20.258832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:36784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.569 [2024-07-13 06:06:20.258848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.569 [2024-07-13 06:06:20.258870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:36816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.569 [2024-07-13 06:06:20.258885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.569 [2024-07-13 06:06:20.258907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:36848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.569 [2024-07-13 06:06:20.258922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:31.570 [2024-07-13 06:06:20.258944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:36880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.570 [2024-07-13 06:06:20.258960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:31.570 [2024-07-13 06:06:20.258982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:36912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.570 [2024-07-13 06:06:20.258997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:31.570 [2024-07-13 06:06:20.259019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:36792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:31.570 [2024-07-13 06:06:20.259034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:31.570 [2024-07-13 06:06:20.259081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.570 [2024-07-13 06:06:20.259098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:31.570 [2024-07-13 06:06:20.259119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:36856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.570 [2024-07-13 06:06:20.259135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:31.570 [2024-07-13 06:06:20.259156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:36888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.570 [2024-07-13 06:06:20.259171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:31.570 [2024-07-13 06:06:20.259194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.570 [2024-07-13 06:06:20.259209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:31.570 [2024-07-13 06:06:20.259231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:36496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.570 [2024-07-13 06:06:20.259246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:31.570 [2024-07-13 06:06:20.259268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:36936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.570 [2024-07-13 06:06:20.259284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:31.570 [2024-07-13 06:06:20.259306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:36968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.570 [2024-07-13 06:06:20.259321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:31.570 [2024-07-13 06:06:20.259342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:37000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.570 [2024-07-13 06:06:20.259357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:31.570 [2024-07-13 06:06:20.259395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.570 [2024-07-13 06:06:20.259412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:31.570 [2024-07-13 06:06:20.259434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 
lba:37064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.570 [2024-07-13 06:06:20.259449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:31.570 [2024-07-13 06:06:20.259471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:36144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.570 [2024-07-13 06:06:20.259486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:31.570 [2024-07-13 06:06:20.259508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:36520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.570 [2024-07-13 06:06:20.259524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:31.570 [2024-07-13 06:06:20.259554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:36552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.570 [2024-07-13 06:06:20.259570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:31.570 [2024-07-13 06:06:20.259593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:36616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.570 [2024-07-13 06:06:20.259608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:31.570 [2024-07-13 06:06:20.261625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:36960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.570 [2024-07-13 06:06:20.261655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:31.570 [2024-07-13 06:06:20.261683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:36992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.570 [2024-07-13 06:06:20.261700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:31.570 [2024-07-13 06:06:20.261723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.570 [2024-07-13 06:06:20.261738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:31.570 [2024-07-13 06:06:20.261760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:37304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.570 [2024-07-13 06:06:20.261775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:31.570 [2024-07-13 06:06:20.261797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:37040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.570 [2024-07-13 06:06:20.261812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:31.570 [2024-07-13 06:06:20.261834] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:37072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.570 [2024-07-13 06:06:20.261850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:31.570 [2024-07-13 06:06:20.261871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:36504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.570 [2024-07-13 06:06:20.261887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:31.570 [2024-07-13 06:06:20.261909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:36568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.570 [2024-07-13 06:06:20.261924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:31.570 [2024-07-13 06:06:20.261946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:37320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.570 [2024-07-13 06:06:20.261961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:31.570 [2024-07-13 06:06:20.261982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:36600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.570 [2024-07-13 06:06:20.261997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:31.570 [2024-07-13 06:06:20.262019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:36664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.570 [2024-07-13 06:06:20.262046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:31.570 [2024-07-13 06:06:20.262070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:36728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.570 [2024-07-13 06:06:20.262085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:31.570 [2024-07-13 06:06:20.262118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:36680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.570 [2024-07-13 06:06:20.262135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:31.570 [2024-07-13 06:06:20.262157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:36240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.570 [2024-07-13 06:06:20.262173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:31.570 [2024-07-13 06:06:20.262194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:36720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.570 [2024-07-13 06:06:20.262209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:19:31.570 [2024-07-13 06:06:20.262231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:36176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.570 [2024-07-13 06:06:20.262246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:31.570 [2024-07-13 06:06:20.262268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:36776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.570 [2024-07-13 06:06:20.262283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:31.570 [2024-07-13 06:06:20.262304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:36408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.570 [2024-07-13 06:06:20.262320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:31.570 [2024-07-13 06:06:20.262342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:36384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.570 [2024-07-13 06:06:20.262356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:31.570 [2024-07-13 06:06:20.262394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:37080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.570 [2024-07-13 06:06:20.262412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:31.570 [2024-07-13 06:06:20.262451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:37112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.570 [2024-07-13 06:06:20.262471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:31.570 [2024-07-13 06:06:20.262494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:36624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.570 [2024-07-13 06:06:20.262509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:31.570 [2024-07-13 06:06:20.262531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:36696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.570 [2024-07-13 06:06:20.262567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:31.570 [2024-07-13 06:06:20.262590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:36768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.570 [2024-07-13 06:06:20.262605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:31.570 [2024-07-13 06:06:20.262627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:37144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.570 [2024-07-13 06:06:20.262642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:50 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:31.571 [2024-07-13 06:06:20.262664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.571 [2024-07-13 06:06:20.262679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:31.571 [2024-07-13 06:06:20.262701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:37208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.571 [2024-07-13 06:06:20.262716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:31.571 [2024-07-13 06:06:20.262737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:37240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.571 [2024-07-13 06:06:20.262752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:31.571 [2024-07-13 06:06:20.262774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:37272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.571 [2024-07-13 06:06:20.262789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:31.571 [2024-07-13 06:06:20.262811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:36816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.571 [2024-07-13 06:06:20.262826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:31.571 [2024-07-13 06:06:20.262847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:36880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.571 [2024-07-13 06:06:20.262862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:31.571 [2024-07-13 06:06:20.262884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:36792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.571 [2024-07-13 06:06:20.262899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:31.571 [2024-07-13 06:06:20.262920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:36856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.571 [2024-07-13 06:06:20.262935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:31.571 [2024-07-13 06:06:20.262957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:36920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.571 [2024-07-13 06:06:20.262972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:31.571 [2024-07-13 06:06:20.262994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:36936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.571 [2024-07-13 06:06:20.263009] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:31.571 [2024-07-13 06:06:20.263039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:37000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.571 [2024-07-13 06:06:20.263055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:31.571 [2024-07-13 06:06:20.263077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:37064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.571 [2024-07-13 06:06:20.263092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:31.571 [2024-07-13 06:06:20.263114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:36520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.571 [2024-07-13 06:06:20.263129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:31.571 [2024-07-13 06:06:20.263151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:36616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.571 [2024-07-13 06:06:20.263166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:31.571 [2024-07-13 06:06:20.263188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:37104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.571 [2024-07-13 06:06:20.263203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:31.571 [2024-07-13 06:06:20.263224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:37344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.571 [2024-07-13 06:06:20.263239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:31.571 [2024-07-13 06:06:20.263261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:37360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.571 [2024-07-13 06:06:20.263276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:31.571 [2024-07-13 06:06:20.263302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:37376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.571 [2024-07-13 06:06:20.263317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:31.571 [2024-07-13 06:06:20.263339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:37392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.571 [2024-07-13 06:06:20.263354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:31.571 [2024-07-13 06:06:20.263390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:37408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:31.571 [2024-07-13 06:06:20.263407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:31.571 [2024-07-13 06:06:20.263429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:37424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.571 [2024-07-13 06:06:20.263444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:31.571 [2024-07-13 06:06:20.263466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:37440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.571 [2024-07-13 06:06:20.263482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:31.571 [2024-07-13 06:06:20.263513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:37456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.571 [2024-07-13 06:06:20.263529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:31.571 [2024-07-13 06:06:20.263550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:37120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.571 [2024-07-13 06:06:20.263566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:31.571 [2024-07-13 06:06:20.263588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:37152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.571 [2024-07-13 06:06:20.263603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:31.571 [2024-07-13 06:06:20.263625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:37184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.571 [2024-07-13 06:06:20.263640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:31.571 [2024-07-13 06:06:20.263661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:37216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.571 [2024-07-13 06:06:20.263676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:31.571 [2024-07-13 06:06:20.263698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:37248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.571 [2024-07-13 06:06:20.263713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:31.571 [2024-07-13 06:06:20.263736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:37280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.571 [2024-07-13 06:06:20.263751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:31.571 [2024-07-13 06:06:20.266435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 
nsid:1 lba:36840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.571 [2024-07-13 06:06:20.266474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:31.571 [2024-07-13 06:06:20.266502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:36904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.571 [2024-07-13 06:06:20.266519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:31.571 [2024-07-13 06:06:20.266542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:37480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.571 [2024-07-13 06:06:20.266557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:31.571 [2024-07-13 06:06:20.266579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:37496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.571 [2024-07-13 06:06:20.266594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:31.571 [2024-07-13 06:06:20.266616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:37512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.571 [2024-07-13 06:06:20.266632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:31.571 [2024-07-13 06:06:20.266666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:36952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.571 [2024-07-13 06:06:20.266683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:31.571 [2024-07-13 06:06:20.266705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:37016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.571 [2024-07-13 06:06:20.266720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:31.571 [2024-07-13 06:06:20.266742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:36488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.571 [2024-07-13 06:06:20.266757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:31.571 [2024-07-13 06:06:20.266779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:37528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.571 [2024-07-13 06:06:20.266794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:31.571 [2024-07-13 06:06:20.266821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:36960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.571 [2024-07-13 06:06:20.266837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:31.571 [2024-07-13 06:06:20.266859] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:37288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.571 [2024-07-13 06:06:20.266874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:31.571 [2024-07-13 06:06:20.266895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:37040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.571 [2024-07-13 06:06:20.266910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:31.572 [2024-07-13 06:06:20.266932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:36504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.572 [2024-07-13 06:06:20.266947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:31.572 [2024-07-13 06:06:20.266969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:37320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.572 [2024-07-13 06:06:20.266984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:31.572 [2024-07-13 06:06:20.267006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:36664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.572 [2024-07-13 06:06:20.267021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:31.572 [2024-07-13 06:06:20.267048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:36680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.572 [2024-07-13 06:06:20.267065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:31.572 [2024-07-13 06:06:20.267087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:36720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.572 [2024-07-13 06:06:20.267102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:31.572 [2024-07-13 06:06:20.267123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.572 [2024-07-13 06:06:20.267147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:31.572 [2024-07-13 06:06:20.267169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:36384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.572 [2024-07-13 06:06:20.267185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:31.572 [2024-07-13 06:06:20.267206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:37112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.572 [2024-07-13 06:06:20.267221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005d p:0 m:0 dnr:0 
00:19:31.572 [2024-07-13 06:06:20.267243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:36696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.572 [2024-07-13 06:06:20.267259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:31.572 [2024-07-13 06:06:20.267281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:37144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.572 [2024-07-13 06:06:20.267296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:31.572 [2024-07-13 06:06:20.267317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:37208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.572 [2024-07-13 06:06:20.267332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:31.572 [2024-07-13 06:06:20.267354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:37272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.572 [2024-07-13 06:06:20.267382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:31.572 [2024-07-13 06:06:20.267408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:36880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.572 [2024-07-13 06:06:20.267423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:31.572 [2024-07-13 06:06:20.267449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:36856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.572 [2024-07-13 06:06:20.267465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:31.572 [2024-07-13 06:06:20.267487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:36936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.572 [2024-07-13 06:06:20.267502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:31.572 [2024-07-13 06:06:20.267540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:37064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.572 [2024-07-13 06:06:20.267559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:31.572 [2024-07-13 06:06:20.267582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.572 [2024-07-13 06:06:20.267598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:31.572 [2024-07-13 06:06:20.267626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:37344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.572 [2024-07-13 06:06:20.267650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:50 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:31.572 [2024-07-13 06:06:20.267673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:37376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.572 [2024-07-13 06:06:20.267689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:31.572 [2024-07-13 06:06:20.267712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:37408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.572 [2024-07-13 06:06:20.267727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:31.572 [2024-07-13 06:06:20.267749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:37440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.572 [2024-07-13 06:06:20.267764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:31.572 [2024-07-13 06:06:20.267785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:37120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.572 [2024-07-13 06:06:20.267800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:31.572 [2024-07-13 06:06:20.267822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:37184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.572 [2024-07-13 06:06:20.267837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:31.572 [2024-07-13 06:06:20.267859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:37248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.572 [2024-07-13 06:06:20.267874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:31.572 [2024-07-13 06:06:20.267896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:37296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.572 [2024-07-13 06:06:20.267911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:31.572 [2024-07-13 06:06:20.267933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:37328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.572 [2024-07-13 06:06:20.267947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:31.572 [2024-07-13 06:06:20.267969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:36688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.572 [2024-07-13 06:06:20.267984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:31.572 [2024-07-13 06:06:20.268006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:37096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.572 [2024-07-13 06:06:20.268021] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:31.572 [2024-07-13 06:06:20.268042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:37552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.572 [2024-07-13 06:06:20.268057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:31.572 [2024-07-13 06:06:20.268080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:37568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.572 [2024-07-13 06:06:20.268095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:31.572 [2024-07-13 06:06:20.268124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:37584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.572 [2024-07-13 06:06:20.268140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:31.572 [2024-07-13 06:06:20.268162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:37600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.572 [2024-07-13 06:06:20.268177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:31.572 [2024-07-13 06:06:20.268199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:37616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.572 [2024-07-13 06:06:20.268214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:31.572 [2024-07-13 06:06:20.268235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:37632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.572 [2024-07-13 06:06:20.268251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:31.572 [2024-07-13 06:06:20.268272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:37648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.572 [2024-07-13 06:06:20.268288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:31.572 [2024-07-13 06:06:20.268310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:37664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.572 [2024-07-13 06:06:20.268325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:31.573 [2024-07-13 06:06:20.268346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:37680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.573 [2024-07-13 06:06:20.268361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:31.573 [2024-07-13 06:06:20.268397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:37160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:31.573 [2024-07-13 06:06:20.268413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:31.573 [2024-07-13 06:06:20.268436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:37224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.573 [2024-07-13 06:06:20.268451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:31.573 [2024-07-13 06:06:20.270431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:36888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.573 [2024-07-13 06:06:20.270461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:31.573 [2024-07-13 06:06:20.270502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:37688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.573 [2024-07-13 06:06:20.270522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:31.573 [2024-07-13 06:06:20.270545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:37704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.573 [2024-07-13 06:06:20.270561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:31.573 [2024-07-13 06:06:20.270598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:37032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.573 [2024-07-13 06:06:20.270616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.573 [2024-07-13 06:06:20.270638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:37336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.573 [2024-07-13 06:06:20.270654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.573 [2024-07-13 06:06:20.270675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:37368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.573 [2024-07-13 06:06:20.270691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:31.573 [2024-07-13 06:06:20.270713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:37400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.573 [2024-07-13 06:06:20.270728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:31.573 [2024-07-13 06:06:20.270750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:37720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.573 [2024-07-13 06:06:20.270779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:31.573 [2024-07-13 06:06:20.270818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 
lba:37432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.573 [2024-07-13 06:06:20.270833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:31.573 [2024-07-13 06:06:20.270870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:37464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.573 [2024-07-13 06:06:20.270900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:31.573 [2024-07-13 06:06:20.270922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:36904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.573 [2024-07-13 06:06:20.270937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:31.573 [2024-07-13 06:06:20.270959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:37496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.573 [2024-07-13 06:06:20.270975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:31.573 [2024-07-13 06:06:20.270996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:36952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.573 [2024-07-13 06:06:20.271011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:31.573 [2024-07-13 06:06:20.271033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:36488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.573 [2024-07-13 06:06:20.271048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:31.573 [2024-07-13 06:06:20.271070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:36960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.573 [2024-07-13 06:06:20.271085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:31.573 [2024-07-13 06:06:20.271107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:37040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.573 [2024-07-13 06:06:20.271130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:31.573 [2024-07-13 06:06:20.271154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:37320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.573 [2024-07-13 06:06:20.271169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:31.573 [2024-07-13 06:06:20.271191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:36680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.573 [2024-07-13 06:06:20.271206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:31.573 [2024-07-13 06:06:20.271228] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:36776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.573 [2024-07-13 06:06:20.271243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:31.573 [2024-07-13 06:06:20.271282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:37112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.573 [2024-07-13 06:06:20.271301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:31.573 [2024-07-13 06:06:20.271324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:37144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.573 [2024-07-13 06:06:20.271340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:31.573 [2024-07-13 06:06:20.271361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:37272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.573 [2024-07-13 06:06:20.271377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:31.573 [2024-07-13 06:06:20.271399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.573 [2024-07-13 06:06:20.271414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:31.573 [2024-07-13 06:06:20.271450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:37064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.573 [2024-07-13 06:06:20.271468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:31.573 [2024-07-13 06:06:20.271490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:37344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.573 [2024-07-13 06:06:20.271506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:31.573 [2024-07-13 06:06:20.271528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:37408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.573 [2024-07-13 06:06:20.271543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:31.573 [2024-07-13 06:06:20.271564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:37120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.573 [2024-07-13 06:06:20.271579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:31.573 [2024-07-13 06:06:20.271601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:37248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.573 [2024-07-13 06:06:20.271625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 
00:19:31.573 [2024-07-13 06:06:20.271649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:37328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.573 [2024-07-13 06:06:20.271664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:31.573 [2024-07-13 06:06:20.271686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:37096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.573 [2024-07-13 06:06:20.271716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:31.573 [2024-07-13 06:06:20.271737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:37568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.573 [2024-07-13 06:06:20.271752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:31.573 [2024-07-13 06:06:20.271794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:37600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.573 [2024-07-13 06:06:20.271810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:31.573 [2024-07-13 06:06:20.271831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:37632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.573 [2024-07-13 06:06:20.271847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:31.573 [2024-07-13 06:06:20.271871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:37664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.573 [2024-07-13 06:06:20.271887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:31.573 [2024-07-13 06:06:20.271909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:37160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.573 [2024-07-13 06:06:20.271924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:31.573 [2024-07-13 06:06:20.271946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.573 [2024-07-13 06:06:20.271961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:31.573 [2024-07-13 06:06:20.271982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:37752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.573 [2024-07-13 06:06:20.271997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:31.573 [2024-07-13 06:06:20.272019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:37768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.573 [2024-07-13 06:06:20.272034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:51 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:31.573 [2024-07-13 06:06:20.272055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:37488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.574 [2024-07-13 06:06:20.272070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:31.574 [2024-07-13 06:06:20.272092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:37520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.574 [2024-07-13 06:06:20.272107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:31.574 Received shutdown signal, test time was about 33.711681 seconds 00:19:31.574 00:19:31.574 Latency(us) 00:19:31.574 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:31.574 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:31.574 Verification LBA range: start 0x0 length 0x4000 00:19:31.574 Nvme0n1 : 33.71 8120.43 31.72 0.00 0.00 15727.55 1087.30 4026531.84 00:19:31.574 =================================================================================================================== 00:19:31.574 Total : 8120.43 31.72 0.00 0.00 15727.55 1087.30 4026531.84 00:19:31.574 06:06:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:31.831 06:06:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:19:31.831 06:06:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:31.831 06:06:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:19:31.831 06:06:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:31.831 06:06:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:19:31.831 06:06:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:31.831 06:06:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:19:31.831 06:06:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:31.831 06:06:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:32.090 rmmod nvme_tcp 00:19:32.090 rmmod nvme_fabrics 00:19:32.090 rmmod nvme_keyring 00:19:32.090 06:06:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:32.090 06:06:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:19:32.090 06:06:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:19:32.090 06:06:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 90805 ']' 00:19:32.090 06:06:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 90805 00:19:32.090 06:06:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 90805 ']' 00:19:32.090 06:06:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 90805 00:19:32.090 06:06:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:19:32.090 06:06:23 nvmf_tcp.nvmf_host_multipath_status -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:32.090 06:06:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90805 00:19:32.090 06:06:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:32.090 06:06:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:32.090 06:06:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90805' 00:19:32.090 killing process with pid 90805 00:19:32.090 06:06:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 90805 00:19:32.090 06:06:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 90805 00:19:32.090 06:06:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:32.090 06:06:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:32.090 06:06:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:32.091 06:06:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:32.350 06:06:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:32.350 06:06:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:32.350 06:06:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:32.350 06:06:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:32.350 06:06:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:32.350 00:19:32.350 real 0m39.113s 00:19:32.350 user 2m7.221s 00:19:32.350 sys 0m11.666s 00:19:32.350 06:06:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:32.350 06:06:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:32.350 ************************************ 00:19:32.350 END TEST nvmf_host_multipath_status 00:19:32.350 ************************************ 00:19:32.350 06:06:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:32.350 06:06:23 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:19:32.350 06:06:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:32.350 06:06:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:32.350 06:06:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:32.350 ************************************ 00:19:32.350 START TEST nvmf_discovery_remove_ifc 00:19:32.350 ************************************ 00:19:32.350 06:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:19:32.350 * Looking for test storage... 
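The bdevperf summary printed above for the multipath run is internally consistent: with 4096-byte I/Os, 8120.43 IOPS works out to the reported 31.72 MiB/s, and the 33.71 s runtime implies roughly 274k verified I/Os. A quick check of the arithmetic (illustrative only, not part of the test scripts):

    awk 'BEGIN {
        iops = 8120.43; runtime = 33.71; io_size = 4096    # values from the summary table above
        printf "throughput = %.2f MiB/s\n", iops * io_size / (1024 * 1024)
        printf "total I/Os ~ %.0f\n", iops * runtime
    }'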
00:19:32.350 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:32.350 06:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:32.350 06:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:19:32.350 06:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:32.350 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:32.350 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:32.350 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:32.350 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:32.350 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:32.350 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:32.350 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:32.350 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:32.350 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:32.350 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:19:32.350 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=d95af516-4532-4483-a837-b3cd72acabce 00:19:32.350 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:32.350 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:32.350 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:32.350 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:32.350 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:32.350 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:32.350 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:32.350 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:32.350 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.350 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.350 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.350 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:19:32.350 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.350 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:19:32.351 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:32.351 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:32.351 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:32.351 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:32.351 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:32.351 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:32.351 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:32.351 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:32.351 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:19:32.351 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:19:32.351 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:19:32.351 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:19:32.351 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:19:32.351 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:19:32.351 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:19:32.351 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:32.351 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:32.351 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:32.351 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:32.351 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:32.351 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:32.351 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:32.351 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:32.351 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:32.351 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:32.351 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:32.351 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:32.351 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:32.351 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:32.351 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:32.351 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:32.351 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:32.351 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:32.351 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:32.351 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:32.351 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:32.351 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:32.351 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:32.351 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:32.351 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:32.351 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:32.351 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:32.351 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:32.351 Cannot find device "nvmf_tgt_br" 00:19:32.351 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:19:32.351 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 
00:19:32.610 Cannot find device "nvmf_tgt_br2" 00:19:32.610 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:19:32.610 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:32.610 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:32.610 Cannot find device "nvmf_tgt_br" 00:19:32.610 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:19:32.610 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:32.610 Cannot find device "nvmf_tgt_br2" 00:19:32.610 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:19:32.610 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:32.610 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:32.610 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:32.610 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:32.610 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:19:32.610 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:32.610 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:32.610 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:19:32.610 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:32.610 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:32.610 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:32.610 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:32.610 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:32.610 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:32.610 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:32.610 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:32.610 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:32.610 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:32.610 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:32.610 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:32.610 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:32.610 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:32.610 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:32.610 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:32.610 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:32.610 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:32.610 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:32.610 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:32.870 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:32.870 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:32.870 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:32.870 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:32.870 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:32.870 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:19:32.870 00:19:32.870 --- 10.0.0.2 ping statistics --- 00:19:32.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:32.870 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:19:32.870 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:32.870 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:32.870 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:19:32.870 00:19:32.870 --- 10.0.0.3 ping statistics --- 00:19:32.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:32.870 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:19:32.870 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:32.870 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:32.870 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:19:32.870 00:19:32.870 --- 10.0.0.1 ping statistics --- 00:19:32.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:32.870 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:19:32.870 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:32.870 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:19:32.870 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:32.870 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:32.870 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:32.870 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:32.870 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:32.870 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:32.870 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:32.870 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:19:32.870 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:32.870 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:32.870 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:32.870 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=91628 00:19:32.870 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:32.870 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 91628 00:19:32.870 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 91628 ']' 00:19:32.870 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:32.870 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:32.870 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:32.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:32.870 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:32.870 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:32.870 [2024-07-13 06:06:24.472631] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
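The nvmf_veth_init trace above builds a small veth/bridge topology, verifies it with the pings, and nvmfappstart then launches the target inside the nvmf_tgt_ns_spdk namespace. Condensed to the essential commands from the trace (link-up steps omitted; comments sketch the intent):

    # initiator side stays in the root namespace, target side moves into nvmf_tgt_ns_spdk
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # 10.0.0.1/24 on the host
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # 10.0.0.2/24 in the namespace
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # 10.0.0.3/24 in the namespace
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge                              # ties the *_br peers together
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    # the target process runs inside the namespace; the host-side app connects from outside
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2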
00:19:32.870 [2024-07-13 06:06:24.473396] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:33.130 [2024-07-13 06:06:24.615035] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.130 [2024-07-13 06:06:24.660067] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:33.130 [2024-07-13 06:06:24.660140] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:33.130 [2024-07-13 06:06:24.660164] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:33.130 [2024-07-13 06:06:24.660174] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:33.130 [2024-07-13 06:06:24.660183] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:33.130 [2024-07-13 06:06:24.660210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:33.130 [2024-07-13 06:06:24.695514] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:33.130 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:33.130 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:19:33.130 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:33.130 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:33.130 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:33.130 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:33.130 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:19:33.130 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.130 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:33.130 [2024-07-13 06:06:24.798138] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:33.130 [2024-07-13 06:06:24.806290] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:19:33.130 null0 00:19:33.130 [2024-07-13 06:06:24.838199] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:33.389 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.389 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=91658 00:19:33.389 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:19:33.389 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 91658 /tmp/host.sock 00:19:33.389 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 91658 ']' 00:19:33.389 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:19:33.389 06:06:24 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:33.389 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:19:33.389 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:19:33.389 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:33.389 06:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:33.389 [2024-07-13 06:06:24.918905] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:19:33.389 [2024-07-13 06:06:24.919015] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91658 ] 00:19:33.389 [2024-07-13 06:06:25.061155] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.389 [2024-07-13 06:06:25.107047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:33.648 06:06:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:33.648 06:06:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:19:33.649 06:06:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:33.649 06:06:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:19:33.649 06:06:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.649 06:06:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:33.649 06:06:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.649 06:06:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:19:33.649 06:06:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.649 06:06:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:33.649 [2024-07-13 06:06:25.223902] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:33.649 06:06:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.649 06:06:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:19:33.649 06:06:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.649 06:06:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:34.586 [2024-07-13 06:06:26.261341] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:19:34.586 [2024-07-13 06:06:26.261393] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:19:34.586 [2024-07-13 06:06:26.261411] 
bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:34.586 [2024-07-13 06:06:26.267401] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:19:34.845 [2024-07-13 06:06:26.324625] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:19:34.845 [2024-07-13 06:06:26.324692] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:19:34.845 [2024-07-13 06:06:26.324722] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:19:34.845 [2024-07-13 06:06:26.324742] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:19:34.845 [2024-07-13 06:06:26.324768] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:19:34.845 06:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.845 06:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:19:34.845 06:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:34.845 06:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:34.845 06:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:34.845 06:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.845 [2024-07-13 06:06:26.330335] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1766ae0 was disconnected and freed. delete nvme_qpair. 
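On the host side, the discovery attach just traced reduces to a short RPC sequence against the /tmp/host.sock application (rpc_cmd is the autotest RPC helper); once the discovery service attaches, nvme0n1 is expected to appear in the bdev list:

    rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1
    rpc_cmd -s /tmp/host.sock framework_start_init
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'   # should print nvme0n1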
00:19:34.845 06:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:34.845 06:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:34.845 06:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:34.845 06:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.845 06:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:19:34.845 06:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:19:34.845 06:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:19:34.845 06:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:19:34.845 06:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:34.845 06:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:34.845 06:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:34.845 06:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.845 06:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:34.845 06:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:34.845 06:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:34.845 06:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.845 06:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:34.845 06:06:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:35.783 06:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:35.783 06:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:35.783 06:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:35.783 06:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.783 06:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:35.783 06:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:35.783 06:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:35.783 06:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.042 06:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:36.042 06:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:36.980 06:06:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:36.980 06:06:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:36.980 06:06:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r 
'.[].name' 00:19:36.980 06:06:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.980 06:06:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:36.980 06:06:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:36.980 06:06:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:36.980 06:06:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.980 06:06:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:36.980 06:06:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:37.916 06:06:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:37.916 06:06:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:37.916 06:06:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.916 06:06:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:37.916 06:06:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:37.916 06:06:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:37.916 06:06:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:37.916 06:06:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.175 06:06:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:38.175 06:06:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:39.110 06:06:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:39.110 06:06:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:39.110 06:06:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.110 06:06:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:39.110 06:06:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:39.110 06:06:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:39.110 06:06:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:39.110 06:06:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.110 06:06:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:39.110 06:06:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:40.047 06:06:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:40.047 06:06:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:40.047 06:06:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:40.047 06:06:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:40.047 06:06:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 
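The repeated get_bdev_list/sleep records around this point are the test polling for the bdev to disappear after the target-side address and interface were removed at discovery_remove_ifc.sh@75-76. A rough sketch of that wait, assuming the helper simply re-queries once per second:

    ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
    # wait_for_bdev '': poll until bdev_get_bdevs no longer reports nvme0n1
    while [[ "$(rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)" != '' ]]; do
        sleep 1
    done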
00:19:40.047 06:06:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:40.047 06:06:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:40.047 [2024-07-13 06:06:31.753002] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:19:40.047 [2024-07-13 06:06:31.753068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:40.047 [2024-07-13 06:06:31.753085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.047 [2024-07-13 06:06:31.753098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:40.047 [2024-07-13 06:06:31.753108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.047 [2024-07-13 06:06:31.753118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:40.047 [2024-07-13 06:06:31.753127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.048 [2024-07-13 06:06:31.753137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:40.048 [2024-07-13 06:06:31.753146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.048 [2024-07-13 06:06:31.753156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:40.048 [2024-07-13 06:06:31.753165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.048 [2024-07-13 06:06:31.753174] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x172ae50 is same with the state(5) to be set 00:19:40.048 [2024-07-13 06:06:31.762995] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x172ae50 (9): Bad file descriptor 00:19:40.048 [2024-07-13 06:06:31.773018] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:40.306 06:06:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.306 06:06:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:40.306 06:06:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:41.242 [2024-07-13 06:06:32.795538] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:19:41.242 [2024-07-13 06:06:32.795686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x172ae50 with addr=10.0.0.2, port=4420 00:19:41.242 [2024-07-13 06:06:32.795722] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x172ae50 is same with the state(5) to be set 00:19:41.242 [2024-07-13 06:06:32.795815] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x172ae50 (9): Bad file descriptor 00:19:41.242 
[2024-07-13 06:06:32.795949] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:41.242 [2024-07-13 06:06:32.795990] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:41.242 [2024-07-13 06:06:32.796020] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:41.242 [2024-07-13 06:06:32.796053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:41.242 [2024-07-13 06:06:32.796096] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:41.242 [2024-07-13 06:06:32.796130] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:41.242 06:06:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:41.242 06:06:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:41.242 06:06:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:41.242 06:06:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.242 06:06:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:41.242 06:06:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:41.242 06:06:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:41.242 06:06:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.242 06:06:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:41.242 06:06:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:42.181 [2024-07-13 06:06:33.796192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:42.181 [2024-07-13 06:06:33.796278] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:42.181 [2024-07-13 06:06:33.796291] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:42.181 [2024-07-13 06:06:33.796300] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:19:42.181 [2024-07-13 06:06:33.796323] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:42.181 [2024-07-13 06:06:33.796354] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:19:42.181 [2024-07-13 06:06:33.796437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:42.181 [2024-07-13 06:06:33.796454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.181 [2024-07-13 06:06:33.796471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:42.181 [2024-07-13 06:06:33.796480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.181 [2024-07-13 06:06:33.796491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:42.181 [2024-07-13 06:06:33.796499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.181 [2024-07-13 06:06:33.796509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:42.181 [2024-07-13 06:06:33.796518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.181 [2024-07-13 06:06:33.796529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:42.181 [2024-07-13 06:06:33.796538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.181 [2024-07-13 06:06:33.796547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
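The error sequence here lines up with the options the discovery was started with: the path drops at about 06:06:31, --reconnect-delay-sec 1 makes the host retry roughly once per second, and --ctrlr-loss-timeout-sec 2 lets it give up at about 06:06:33, deleting the controller and removing the discovery entry. During such an outage the attached controllers could also be listed out-of-band over the same RPC socket (not something this test does; shown only as a hypothetical manual check):

    # hypothetical manual check during the outage; lists attached NVMe bdev controllers
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers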
00:19:42.181 [2024-07-13 06:06:33.796600] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x172a440 (9): Bad file descriptor 00:19:42.181 [2024-07-13 06:06:33.797594] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:19:42.181 [2024-07-13 06:06:33.797616] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:19:42.181 06:06:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:42.181 06:06:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:42.181 06:06:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:42.181 06:06:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.181 06:06:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:42.181 06:06:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:42.181 06:06:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:42.181 06:06:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.441 06:06:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:19:42.441 06:06:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:42.441 06:06:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:42.441 06:06:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:19:42.441 06:06:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:42.441 06:06:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:42.441 06:06:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:42.441 06:06:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:42.441 06:06:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.441 06:06:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:42.441 06:06:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:42.441 06:06:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.441 06:06:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:19:42.441 06:06:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:43.375 06:06:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:43.375 06:06:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:43.375 06:06:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:43.375 06:06:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.375 06:06:35 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:43.375 06:06:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:43.375 06:06:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:43.375 06:06:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.375 06:06:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:19:43.375 06:06:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:44.365 [2024-07-13 06:06:35.807287] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:19:44.365 [2024-07-13 06:06:35.807318] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:19:44.365 [2024-07-13 06:06:35.807337] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:44.365 [2024-07-13 06:06:35.813360] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:19:44.365 [2024-07-13 06:06:35.870038] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:19:44.365 [2024-07-13 06:06:35.870132] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:19:44.365 [2024-07-13 06:06:35.870165] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:19:44.365 [2024-07-13 06:06:35.870187] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:19:44.365 [2024-07-13 06:06:35.870197] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:19:44.365 [2024-07-13 06:06:35.876009] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1776f80 was disconnected and freed. delete nvme_qpair. 
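At this point the discovery poller reconnects to 10.0.0.2:8009, reads the discovery log page, and re-attaches nqn.2016-06.io.spdk:cnode0, which is why the subsystem reappears as nvme1 and its namespace as the nvme1n1 bdev the wait loop is looking for. For context, a hedged sketch of how such a host-side discovery service is started through the bdev_nvme_start_discovery RPC; the RPC itself is real SPDK, but this flag spelling is an assumption inferred from the 10.0.0.2:8009 endpoint and the nvme* bdev prefix in the log, not copied from this test:

HOST_SOCK=/tmp/host.sock

# Start a discovery service named "nvme" against the target's discovery port;
# every NVM subsystem it reports gets attached and exposed as nvme<N>n<M> bdevs.
scripts/rpc.py -s "$HOST_SOCK" bdev_nvme_start_discovery \
    -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4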
00:19:44.365 06:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:44.365 06:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:44.365 06:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:44.366 06:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.366 06:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:44.366 06:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:44.366 06:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:44.624 06:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.624 06:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:19:44.624 06:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:19:44.624 06:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 91658 00:19:44.624 06:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 91658 ']' 00:19:44.624 06:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 91658 00:19:44.624 06:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:19:44.624 06:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:44.624 06:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91658 00:19:44.624 killing process with pid 91658 00:19:44.624 06:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:44.624 06:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:44.624 06:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91658' 00:19:44.624 06:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 91658 00:19:44.624 06:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 91658 00:19:44.624 06:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:19:44.624 06:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:44.624 06:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:19:44.883 06:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:44.883 06:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:19:44.883 06:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:44.883 06:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:44.883 rmmod nvme_tcp 00:19:44.883 rmmod nvme_fabrics 00:19:44.883 rmmod nvme_keyring 00:19:44.883 06:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:44.883 06:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:19:44.883 06:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:19:44.883 06:06:36 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 91628 ']' 00:19:44.883 06:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 91628 00:19:44.883 06:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 91628 ']' 00:19:44.883 06:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 91628 00:19:44.883 06:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:19:44.883 06:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:44.883 06:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91628 00:19:44.883 06:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:44.884 06:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:44.884 killing process with pid 91628 00:19:44.884 06:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91628' 00:19:44.884 06:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 91628 00:19:44.884 06:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 91628 00:19:45.143 06:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:45.143 06:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:45.143 06:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:45.143 06:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:45.143 06:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:45.143 06:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:45.143 06:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:45.143 06:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:45.143 06:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:45.143 00:19:45.143 real 0m12.797s 00:19:45.143 user 0m22.199s 00:19:45.143 sys 0m2.353s 00:19:45.143 06:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:45.143 06:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:45.143 ************************************ 00:19:45.143 END TEST nvmf_discovery_remove_ifc 00:19:45.143 ************************************ 00:19:45.143 06:06:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:45.143 06:06:36 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:19:45.143 06:06:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:45.143 06:06:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:45.143 06:06:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:45.143 ************************************ 00:19:45.143 START TEST nvmf_identify_kernel_target 00:19:45.143 ************************************ 00:19:45.143 06:06:36 nvmf_tcp.nvmf_identify_kernel_target 
-- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:19:45.143 * Looking for test storage... 00:19:45.143 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:45.143 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:45.143 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:19:45.143 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:45.143 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:45.143 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:45.143 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:45.143 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:45.143 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:45.143 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:45.143 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:45.144 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:45.144 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:45.144 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:19:45.144 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=d95af516-4532-4483-a837-b3cd72acabce 00:19:45.144 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:45.144 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:45.144 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:45.144 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:45.144 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:45.144 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:45.144 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:45.144 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:45.144 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.144 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.144 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.144 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:19:45.144 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.144 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:19:45.144 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:45.144 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:45.144 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:45.144 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:45.144 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:45.144 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:45.144 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:45.144 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:45.144 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:19:45.144 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:45.144 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:45.144 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:45.144 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:45.144 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:45.144 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:19:45.144 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:45.144 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:45.403 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:45.403 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:45.403 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:45.403 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:45.403 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:45.403 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:45.403 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:45.403 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:45.403 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:45.403 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:45.403 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:45.403 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:45.403 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:45.403 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:45.403 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:45.403 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:45.403 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:45.403 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:45.403 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:45.403 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:45.403 Cannot find device "nvmf_tgt_br" 00:19:45.403 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:19:45.403 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:45.403 Cannot find device "nvmf_tgt_br2" 00:19:45.403 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:19:45.403 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:45.403 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:45.403 Cannot find device "nvmf_tgt_br" 00:19:45.403 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:19:45.403 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:45.403 Cannot find device "nvmf_tgt_br2" 00:19:45.403 06:06:36 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:19:45.403 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:45.403 06:06:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:45.403 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:45.403 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:45.403 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:19:45.403 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:45.403 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:45.403 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:19:45.403 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:45.403 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:45.403 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:45.403 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:45.403 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:45.403 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:45.403 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:45.403 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:45.403 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:45.403 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:45.403 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:45.403 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:45.403 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:45.403 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:45.403 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:45.403 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:45.403 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:45.403 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:45.662 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:45.662 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:19:45.662 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:45.662 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:45.662 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:45.662 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:45.662 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:45.662 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:19:45.662 00:19:45.662 --- 10.0.0.2 ping statistics --- 00:19:45.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:45.662 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:19:45.662 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:45.662 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:45.662 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:19:45.662 00:19:45.662 --- 10.0.0.3 ping statistics --- 00:19:45.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:45.662 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:19:45.662 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:45.662 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:45.662 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:19:45.662 00:19:45.662 --- 10.0.0.1 ping statistics --- 00:19:45.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:45.662 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:19:45.662 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:45.662 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:19:45.662 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:45.662 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:45.662 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:45.662 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:45.662 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:45.662 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:45.662 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:45.662 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:19:45.662 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:19:45.662 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:19:45.662 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:45.662 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:45.662 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:45.662 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:45.662 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:45.662 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:45.662 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:45.662 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:45.663 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:45.663 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:19:45.663 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:19:45.663 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:19:45.663 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:19:45.663 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:45.663 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:45.663 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:45.663 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:19:45.663 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:19:45.663 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:19:45.663 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:19:45.663 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:45.921 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:45.921 Waiting for block devices as requested 00:19:46.180 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:46.180 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:46.180 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:19:46.180 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:46.180 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:19:46.180 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:19:46.180 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:46.180 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:46.180 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:19:46.180 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:19:46.180 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:19:46.180 No valid GPT data, bailing 00:19:46.180 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:46.438 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:19:46.438 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:19:46.438 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:19:46.438 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:19:46.438 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:19:46.438 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:19:46.438 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:19:46.438 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:19:46.438 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:46.438 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:19:46.438 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:19:46.438 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:19:46.438 No valid GPT data, bailing 00:19:46.438 06:06:37 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:19:46.438 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 
-- # pt= 00:19:46.438 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:19:46.438 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:19:46.438 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:19:46.438 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:19:46.438 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:19:46.438 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:19:46.438 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:19:46.438 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:46.438 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:19:46.438 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:19:46.438 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:19:46.438 No valid GPT data, bailing 00:19:46.438 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:19:46.438 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:19:46.438 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:19:46.438 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:19:46.438 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:19:46.438 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:19:46.438 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:19:46.438 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:19:46.438 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:19:46.438 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:46.438 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:19:46.438 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:19:46.438 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:19:46.438 No valid GPT data, bailing 00:19:46.438 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:19:46.697 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:19:46.697 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:19:46.697 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:19:46.697 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:19:46.697 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 
00:19:46.697 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:46.697 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:19:46.697 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:19:46.697 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:19:46.697 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:19:46.697 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:19:46.697 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:19:46.697 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:19:46.697 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:19:46.697 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:19:46.697 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:19:46.697 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid=d95af516-4532-4483-a837-b3cd72acabce -a 10.0.0.1 -t tcp -s 4420 00:19:46.697 00:19:46.697 Discovery Log Number of Records 2, Generation counter 2 00:19:46.697 =====Discovery Log Entry 0====== 00:19:46.697 trtype: tcp 00:19:46.697 adrfam: ipv4 00:19:46.697 subtype: current discovery subsystem 00:19:46.697 treq: not specified, sq flow control disable supported 00:19:46.697 portid: 1 00:19:46.697 trsvcid: 4420 00:19:46.697 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:46.697 traddr: 10.0.0.1 00:19:46.697 eflags: none 00:19:46.697 sectype: none 00:19:46.697 =====Discovery Log Entry 1====== 00:19:46.697 trtype: tcp 00:19:46.697 adrfam: ipv4 00:19:46.697 subtype: nvme subsystem 00:19:46.697 treq: not specified, sq flow control disable supported 00:19:46.697 portid: 1 00:19:46.697 trsvcid: 4420 00:19:46.697 subnqn: nqn.2016-06.io.spdk:testnqn 00:19:46.697 traddr: 10.0.0.1 00:19:46.697 eflags: none 00:19:46.697 sectype: none 00:19:46.697 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:19:46.697 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:19:46.697 ===================================================== 00:19:46.697 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:19:46.697 ===================================================== 00:19:46.697 Controller Capabilities/Features 00:19:46.697 ================================ 00:19:46.697 Vendor ID: 0000 00:19:46.697 Subsystem Vendor ID: 0000 00:19:46.697 Serial Number: b24fdbcb1fc7ef373f5f 00:19:46.697 Model Number: Linux 00:19:46.697 Firmware Version: 6.7.0-68 00:19:46.697 Recommended Arb Burst: 0 00:19:46.697 IEEE OUI Identifier: 00 00 00 00:19:46.697 Multi-path I/O 00:19:46.697 May have multiple subsystem ports: No 00:19:46.697 May have multiple controllers: No 00:19:46.697 Associated with SR-IOV VF: No 00:19:46.697 Max Data Transfer Size: Unlimited 00:19:46.697 Max Number of Namespaces: 0 
00:19:46.697 Max Number of I/O Queues: 1024 00:19:46.697 NVMe Specification Version (VS): 1.3 00:19:46.697 NVMe Specification Version (Identify): 1.3 00:19:46.697 Maximum Queue Entries: 1024 00:19:46.697 Contiguous Queues Required: No 00:19:46.697 Arbitration Mechanisms Supported 00:19:46.697 Weighted Round Robin: Not Supported 00:19:46.697 Vendor Specific: Not Supported 00:19:46.697 Reset Timeout: 7500 ms 00:19:46.697 Doorbell Stride: 4 bytes 00:19:46.697 NVM Subsystem Reset: Not Supported 00:19:46.698 Command Sets Supported 00:19:46.698 NVM Command Set: Supported 00:19:46.698 Boot Partition: Not Supported 00:19:46.698 Memory Page Size Minimum: 4096 bytes 00:19:46.698 Memory Page Size Maximum: 4096 bytes 00:19:46.698 Persistent Memory Region: Not Supported 00:19:46.698 Optional Asynchronous Events Supported 00:19:46.698 Namespace Attribute Notices: Not Supported 00:19:46.698 Firmware Activation Notices: Not Supported 00:19:46.698 ANA Change Notices: Not Supported 00:19:46.698 PLE Aggregate Log Change Notices: Not Supported 00:19:46.698 LBA Status Info Alert Notices: Not Supported 00:19:46.698 EGE Aggregate Log Change Notices: Not Supported 00:19:46.698 Normal NVM Subsystem Shutdown event: Not Supported 00:19:46.698 Zone Descriptor Change Notices: Not Supported 00:19:46.698 Discovery Log Change Notices: Supported 00:19:46.698 Controller Attributes 00:19:46.698 128-bit Host Identifier: Not Supported 00:19:46.698 Non-Operational Permissive Mode: Not Supported 00:19:46.698 NVM Sets: Not Supported 00:19:46.698 Read Recovery Levels: Not Supported 00:19:46.698 Endurance Groups: Not Supported 00:19:46.698 Predictable Latency Mode: Not Supported 00:19:46.698 Traffic Based Keep ALive: Not Supported 00:19:46.698 Namespace Granularity: Not Supported 00:19:46.698 SQ Associations: Not Supported 00:19:46.698 UUID List: Not Supported 00:19:46.698 Multi-Domain Subsystem: Not Supported 00:19:46.698 Fixed Capacity Management: Not Supported 00:19:46.698 Variable Capacity Management: Not Supported 00:19:46.698 Delete Endurance Group: Not Supported 00:19:46.698 Delete NVM Set: Not Supported 00:19:46.698 Extended LBA Formats Supported: Not Supported 00:19:46.698 Flexible Data Placement Supported: Not Supported 00:19:46.698 00:19:46.698 Controller Memory Buffer Support 00:19:46.698 ================================ 00:19:46.698 Supported: No 00:19:46.698 00:19:46.698 Persistent Memory Region Support 00:19:46.698 ================================ 00:19:46.698 Supported: No 00:19:46.698 00:19:46.698 Admin Command Set Attributes 00:19:46.698 ============================ 00:19:46.698 Security Send/Receive: Not Supported 00:19:46.698 Format NVM: Not Supported 00:19:46.698 Firmware Activate/Download: Not Supported 00:19:46.698 Namespace Management: Not Supported 00:19:46.698 Device Self-Test: Not Supported 00:19:46.698 Directives: Not Supported 00:19:46.698 NVMe-MI: Not Supported 00:19:46.698 Virtualization Management: Not Supported 00:19:46.698 Doorbell Buffer Config: Not Supported 00:19:46.698 Get LBA Status Capability: Not Supported 00:19:46.698 Command & Feature Lockdown Capability: Not Supported 00:19:46.698 Abort Command Limit: 1 00:19:46.698 Async Event Request Limit: 1 00:19:46.698 Number of Firmware Slots: N/A 00:19:46.698 Firmware Slot 1 Read-Only: N/A 00:19:46.698 Firmware Activation Without Reset: N/A 00:19:46.698 Multiple Update Detection Support: N/A 00:19:46.698 Firmware Update Granularity: No Information Provided 00:19:46.698 Per-Namespace SMART Log: No 00:19:46.698 Asymmetric Namespace Access Log Page: 
Not Supported 00:19:46.698 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:19:46.698 Command Effects Log Page: Not Supported 00:19:46.698 Get Log Page Extended Data: Supported 00:19:46.698 Telemetry Log Pages: Not Supported 00:19:46.698 Persistent Event Log Pages: Not Supported 00:19:46.698 Supported Log Pages Log Page: May Support 00:19:46.698 Commands Supported & Effects Log Page: Not Supported 00:19:46.698 Feature Identifiers & Effects Log Page:May Support 00:19:46.698 NVMe-MI Commands & Effects Log Page: May Support 00:19:46.698 Data Area 4 for Telemetry Log: Not Supported 00:19:46.698 Error Log Page Entries Supported: 1 00:19:46.698 Keep Alive: Not Supported 00:19:46.698 00:19:46.698 NVM Command Set Attributes 00:19:46.698 ========================== 00:19:46.698 Submission Queue Entry Size 00:19:46.698 Max: 1 00:19:46.698 Min: 1 00:19:46.698 Completion Queue Entry Size 00:19:46.698 Max: 1 00:19:46.698 Min: 1 00:19:46.698 Number of Namespaces: 0 00:19:46.698 Compare Command: Not Supported 00:19:46.698 Write Uncorrectable Command: Not Supported 00:19:46.698 Dataset Management Command: Not Supported 00:19:46.698 Write Zeroes Command: Not Supported 00:19:46.698 Set Features Save Field: Not Supported 00:19:46.698 Reservations: Not Supported 00:19:46.698 Timestamp: Not Supported 00:19:46.698 Copy: Not Supported 00:19:46.698 Volatile Write Cache: Not Present 00:19:46.698 Atomic Write Unit (Normal): 1 00:19:46.698 Atomic Write Unit (PFail): 1 00:19:46.698 Atomic Compare & Write Unit: 1 00:19:46.698 Fused Compare & Write: Not Supported 00:19:46.698 Scatter-Gather List 00:19:46.698 SGL Command Set: Supported 00:19:46.698 SGL Keyed: Not Supported 00:19:46.698 SGL Bit Bucket Descriptor: Not Supported 00:19:46.698 SGL Metadata Pointer: Not Supported 00:19:46.698 Oversized SGL: Not Supported 00:19:46.698 SGL Metadata Address: Not Supported 00:19:46.698 SGL Offset: Supported 00:19:46.698 Transport SGL Data Block: Not Supported 00:19:46.698 Replay Protected Memory Block: Not Supported 00:19:46.698 00:19:46.698 Firmware Slot Information 00:19:46.698 ========================= 00:19:46.698 Active slot: 0 00:19:46.698 00:19:46.698 00:19:46.698 Error Log 00:19:46.698 ========= 00:19:46.698 00:19:46.698 Active Namespaces 00:19:46.698 ================= 00:19:46.698 Discovery Log Page 00:19:46.698 ================== 00:19:46.698 Generation Counter: 2 00:19:46.698 Number of Records: 2 00:19:46.698 Record Format: 0 00:19:46.698 00:19:46.698 Discovery Log Entry 0 00:19:46.698 ---------------------- 00:19:46.698 Transport Type: 3 (TCP) 00:19:46.698 Address Family: 1 (IPv4) 00:19:46.698 Subsystem Type: 3 (Current Discovery Subsystem) 00:19:46.698 Entry Flags: 00:19:46.698 Duplicate Returned Information: 0 00:19:46.698 Explicit Persistent Connection Support for Discovery: 0 00:19:46.698 Transport Requirements: 00:19:46.698 Secure Channel: Not Specified 00:19:46.698 Port ID: 1 (0x0001) 00:19:46.698 Controller ID: 65535 (0xffff) 00:19:46.698 Admin Max SQ Size: 32 00:19:46.698 Transport Service Identifier: 4420 00:19:46.698 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:19:46.698 Transport Address: 10.0.0.1 00:19:46.698 Discovery Log Entry 1 00:19:46.698 ---------------------- 00:19:46.698 Transport Type: 3 (TCP) 00:19:46.698 Address Family: 1 (IPv4) 00:19:46.698 Subsystem Type: 2 (NVM Subsystem) 00:19:46.698 Entry Flags: 00:19:46.698 Duplicate Returned Information: 0 00:19:46.698 Explicit Persistent Connection Support for Discovery: 0 00:19:46.698 Transport Requirements: 00:19:46.698 
Secure Channel: Not Specified 00:19:46.698 Port ID: 1 (0x0001) 00:19:46.698 Controller ID: 65535 (0xffff) 00:19:46.698 Admin Max SQ Size: 32 00:19:46.698 Transport Service Identifier: 4420 00:19:46.698 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:19:46.698 Transport Address: 10.0.0.1 00:19:46.698 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:46.957 get_feature(0x01) failed 00:19:46.957 get_feature(0x02) failed 00:19:46.957 get_feature(0x04) failed 00:19:46.957 ===================================================== 00:19:46.957 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:19:46.957 ===================================================== 00:19:46.957 Controller Capabilities/Features 00:19:46.957 ================================ 00:19:46.957 Vendor ID: 0000 00:19:46.957 Subsystem Vendor ID: 0000 00:19:46.957 Serial Number: db378776da226eba7fdd 00:19:46.958 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:19:46.958 Firmware Version: 6.7.0-68 00:19:46.958 Recommended Arb Burst: 6 00:19:46.958 IEEE OUI Identifier: 00 00 00 00:19:46.958 Multi-path I/O 00:19:46.958 May have multiple subsystem ports: Yes 00:19:46.958 May have multiple controllers: Yes 00:19:46.958 Associated with SR-IOV VF: No 00:19:46.958 Max Data Transfer Size: Unlimited 00:19:46.958 Max Number of Namespaces: 1024 00:19:46.958 Max Number of I/O Queues: 128 00:19:46.958 NVMe Specification Version (VS): 1.3 00:19:46.958 NVMe Specification Version (Identify): 1.3 00:19:46.958 Maximum Queue Entries: 1024 00:19:46.958 Contiguous Queues Required: No 00:19:46.958 Arbitration Mechanisms Supported 00:19:46.958 Weighted Round Robin: Not Supported 00:19:46.958 Vendor Specific: Not Supported 00:19:46.958 Reset Timeout: 7500 ms 00:19:46.958 Doorbell Stride: 4 bytes 00:19:46.958 NVM Subsystem Reset: Not Supported 00:19:46.958 Command Sets Supported 00:19:46.958 NVM Command Set: Supported 00:19:46.958 Boot Partition: Not Supported 00:19:46.958 Memory Page Size Minimum: 4096 bytes 00:19:46.958 Memory Page Size Maximum: 4096 bytes 00:19:46.958 Persistent Memory Region: Not Supported 00:19:46.958 Optional Asynchronous Events Supported 00:19:46.958 Namespace Attribute Notices: Supported 00:19:46.958 Firmware Activation Notices: Not Supported 00:19:46.958 ANA Change Notices: Supported 00:19:46.958 PLE Aggregate Log Change Notices: Not Supported 00:19:46.958 LBA Status Info Alert Notices: Not Supported 00:19:46.958 EGE Aggregate Log Change Notices: Not Supported 00:19:46.958 Normal NVM Subsystem Shutdown event: Not Supported 00:19:46.958 Zone Descriptor Change Notices: Not Supported 00:19:46.958 Discovery Log Change Notices: Not Supported 00:19:46.958 Controller Attributes 00:19:46.958 128-bit Host Identifier: Supported 00:19:46.958 Non-Operational Permissive Mode: Not Supported 00:19:46.958 NVM Sets: Not Supported 00:19:46.958 Read Recovery Levels: Not Supported 00:19:46.958 Endurance Groups: Not Supported 00:19:46.958 Predictable Latency Mode: Not Supported 00:19:46.958 Traffic Based Keep ALive: Supported 00:19:46.958 Namespace Granularity: Not Supported 00:19:46.958 SQ Associations: Not Supported 00:19:46.958 UUID List: Not Supported 00:19:46.958 Multi-Domain Subsystem: Not Supported 00:19:46.958 Fixed Capacity Management: Not Supported 00:19:46.958 Variable Capacity Management: Not Supported 00:19:46.958 
Delete Endurance Group: Not Supported 00:19:46.958 Delete NVM Set: Not Supported 00:19:46.958 Extended LBA Formats Supported: Not Supported 00:19:46.958 Flexible Data Placement Supported: Not Supported 00:19:46.958 00:19:46.958 Controller Memory Buffer Support 00:19:46.958 ================================ 00:19:46.958 Supported: No 00:19:46.958 00:19:46.958 Persistent Memory Region Support 00:19:46.958 ================================ 00:19:46.958 Supported: No 00:19:46.958 00:19:46.958 Admin Command Set Attributes 00:19:46.958 ============================ 00:19:46.958 Security Send/Receive: Not Supported 00:19:46.958 Format NVM: Not Supported 00:19:46.958 Firmware Activate/Download: Not Supported 00:19:46.958 Namespace Management: Not Supported 00:19:46.958 Device Self-Test: Not Supported 00:19:46.958 Directives: Not Supported 00:19:46.958 NVMe-MI: Not Supported 00:19:46.958 Virtualization Management: Not Supported 00:19:46.958 Doorbell Buffer Config: Not Supported 00:19:46.958 Get LBA Status Capability: Not Supported 00:19:46.958 Command & Feature Lockdown Capability: Not Supported 00:19:46.958 Abort Command Limit: 4 00:19:46.958 Async Event Request Limit: 4 00:19:46.958 Number of Firmware Slots: N/A 00:19:46.958 Firmware Slot 1 Read-Only: N/A 00:19:46.958 Firmware Activation Without Reset: N/A 00:19:46.958 Multiple Update Detection Support: N/A 00:19:46.958 Firmware Update Granularity: No Information Provided 00:19:46.958 Per-Namespace SMART Log: Yes 00:19:46.958 Asymmetric Namespace Access Log Page: Supported 00:19:46.958 ANA Transition Time : 10 sec 00:19:46.958 00:19:46.958 Asymmetric Namespace Access Capabilities 00:19:46.958 ANA Optimized State : Supported 00:19:46.958 ANA Non-Optimized State : Supported 00:19:46.958 ANA Inaccessible State : Supported 00:19:46.958 ANA Persistent Loss State : Supported 00:19:46.958 ANA Change State : Supported 00:19:46.958 ANAGRPID is not changed : No 00:19:46.958 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:19:46.958 00:19:46.958 ANA Group Identifier Maximum : 128 00:19:46.958 Number of ANA Group Identifiers : 128 00:19:46.958 Max Number of Allowed Namespaces : 1024 00:19:46.958 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:19:46.958 Command Effects Log Page: Supported 00:19:46.958 Get Log Page Extended Data: Supported 00:19:46.958 Telemetry Log Pages: Not Supported 00:19:46.958 Persistent Event Log Pages: Not Supported 00:19:46.958 Supported Log Pages Log Page: May Support 00:19:46.958 Commands Supported & Effects Log Page: Not Supported 00:19:46.958 Feature Identifiers & Effects Log Page:May Support 00:19:46.958 NVMe-MI Commands & Effects Log Page: May Support 00:19:46.958 Data Area 4 for Telemetry Log: Not Supported 00:19:46.958 Error Log Page Entries Supported: 128 00:19:46.958 Keep Alive: Supported 00:19:46.958 Keep Alive Granularity: 1000 ms 00:19:46.958 00:19:46.958 NVM Command Set Attributes 00:19:46.958 ========================== 00:19:46.958 Submission Queue Entry Size 00:19:46.958 Max: 64 00:19:46.958 Min: 64 00:19:46.958 Completion Queue Entry Size 00:19:46.958 Max: 16 00:19:46.958 Min: 16 00:19:46.958 Number of Namespaces: 1024 00:19:46.958 Compare Command: Not Supported 00:19:46.958 Write Uncorrectable Command: Not Supported 00:19:46.958 Dataset Management Command: Supported 00:19:46.958 Write Zeroes Command: Supported 00:19:46.958 Set Features Save Field: Not Supported 00:19:46.958 Reservations: Not Supported 00:19:46.958 Timestamp: Not Supported 00:19:46.958 Copy: Not Supported 00:19:46.958 Volatile Write Cache: Present 
00:19:46.958 Atomic Write Unit (Normal): 1 00:19:46.958 Atomic Write Unit (PFail): 1 00:19:46.958 Atomic Compare & Write Unit: 1 00:19:46.958 Fused Compare & Write: Not Supported 00:19:46.958 Scatter-Gather List 00:19:46.958 SGL Command Set: Supported 00:19:46.958 SGL Keyed: Not Supported 00:19:46.958 SGL Bit Bucket Descriptor: Not Supported 00:19:46.958 SGL Metadata Pointer: Not Supported 00:19:46.958 Oversized SGL: Not Supported 00:19:46.958 SGL Metadata Address: Not Supported 00:19:46.958 SGL Offset: Supported 00:19:46.958 Transport SGL Data Block: Not Supported 00:19:46.958 Replay Protected Memory Block: Not Supported 00:19:46.958 00:19:46.958 Firmware Slot Information 00:19:46.958 ========================= 00:19:46.958 Active slot: 0 00:19:46.958 00:19:46.958 Asymmetric Namespace Access 00:19:46.958 =========================== 00:19:46.958 Change Count : 0 00:19:46.958 Number of ANA Group Descriptors : 1 00:19:46.958 ANA Group Descriptor : 0 00:19:46.958 ANA Group ID : 1 00:19:46.958 Number of NSID Values : 1 00:19:46.958 Change Count : 0 00:19:46.958 ANA State : 1 00:19:46.958 Namespace Identifier : 1 00:19:46.958 00:19:46.958 Commands Supported and Effects 00:19:46.958 ============================== 00:19:46.958 Admin Commands 00:19:46.958 -------------- 00:19:46.958 Get Log Page (02h): Supported 00:19:46.958 Identify (06h): Supported 00:19:46.958 Abort (08h): Supported 00:19:46.958 Set Features (09h): Supported 00:19:46.958 Get Features (0Ah): Supported 00:19:46.958 Asynchronous Event Request (0Ch): Supported 00:19:46.958 Keep Alive (18h): Supported 00:19:46.958 I/O Commands 00:19:46.958 ------------ 00:19:46.958 Flush (00h): Supported 00:19:46.958 Write (01h): Supported LBA-Change 00:19:46.958 Read (02h): Supported 00:19:46.958 Write Zeroes (08h): Supported LBA-Change 00:19:46.958 Dataset Management (09h): Supported 00:19:46.958 00:19:46.958 Error Log 00:19:46.958 ========= 00:19:46.958 Entry: 0 00:19:46.958 Error Count: 0x3 00:19:46.958 Submission Queue Id: 0x0 00:19:46.958 Command Id: 0x5 00:19:46.958 Phase Bit: 0 00:19:46.958 Status Code: 0x2 00:19:46.958 Status Code Type: 0x0 00:19:46.958 Do Not Retry: 1 00:19:46.958 Error Location: 0x28 00:19:46.958 LBA: 0x0 00:19:46.958 Namespace: 0x0 00:19:46.958 Vendor Log Page: 0x0 00:19:46.958 ----------- 00:19:46.958 Entry: 1 00:19:46.958 Error Count: 0x2 00:19:46.958 Submission Queue Id: 0x0 00:19:46.958 Command Id: 0x5 00:19:46.958 Phase Bit: 0 00:19:46.958 Status Code: 0x2 00:19:46.958 Status Code Type: 0x0 00:19:46.958 Do Not Retry: 1 00:19:46.958 Error Location: 0x28 00:19:46.958 LBA: 0x0 00:19:46.958 Namespace: 0x0 00:19:46.958 Vendor Log Page: 0x0 00:19:46.958 ----------- 00:19:46.958 Entry: 2 00:19:46.958 Error Count: 0x1 00:19:46.958 Submission Queue Id: 0x0 00:19:46.958 Command Id: 0x4 00:19:46.958 Phase Bit: 0 00:19:46.959 Status Code: 0x2 00:19:46.959 Status Code Type: 0x0 00:19:46.959 Do Not Retry: 1 00:19:46.959 Error Location: 0x28 00:19:46.959 LBA: 0x0 00:19:46.959 Namespace: 0x0 00:19:46.959 Vendor Log Page: 0x0 00:19:46.959 00:19:46.959 Number of Queues 00:19:46.959 ================ 00:19:46.959 Number of I/O Submission Queues: 128 00:19:46.959 Number of I/O Completion Queues: 128 00:19:46.959 00:19:46.959 ZNS Specific Controller Data 00:19:46.959 ============================ 00:19:46.959 Zone Append Size Limit: 0 00:19:46.959 00:19:46.959 00:19:46.959 Active Namespaces 00:19:46.959 ================= 00:19:46.959 get_feature(0x05) failed 00:19:46.959 Namespace ID:1 00:19:46.959 Command Set Identifier: NVM (00h) 
00:19:46.959 Deallocate: Supported 00:19:46.959 Deallocated/Unwritten Error: Not Supported 00:19:46.959 Deallocated Read Value: Unknown 00:19:46.959 Deallocate in Write Zeroes: Not Supported 00:19:46.959 Deallocated Guard Field: 0xFFFF 00:19:46.959 Flush: Supported 00:19:46.959 Reservation: Not Supported 00:19:46.959 Namespace Sharing Capabilities: Multiple Controllers 00:19:46.959 Size (in LBAs): 1310720 (5GiB) 00:19:46.959 Capacity (in LBAs): 1310720 (5GiB) 00:19:46.959 Utilization (in LBAs): 1310720 (5GiB) 00:19:46.959 UUID: 7f31c87f-b46e-4b7d-9083-e863354fc3b2 00:19:46.959 Thin Provisioning: Not Supported 00:19:46.959 Per-NS Atomic Units: Yes 00:19:46.959 Atomic Boundary Size (Normal): 0 00:19:46.959 Atomic Boundary Size (PFail): 0 00:19:46.959 Atomic Boundary Offset: 0 00:19:46.959 NGUID/EUI64 Never Reused: No 00:19:46.959 ANA group ID: 1 00:19:46.959 Namespace Write Protected: No 00:19:46.959 Number of LBA Formats: 1 00:19:46.959 Current LBA Format: LBA Format #00 00:19:46.959 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:19:46.959 00:19:46.959 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:19:46.959 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:46.959 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:19:46.959 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:46.959 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:19:46.959 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:46.959 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:46.959 rmmod nvme_tcp 00:19:46.959 rmmod nvme_fabrics 00:19:47.217 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:47.218 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:19:47.218 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:19:47.218 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:19:47.218 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:47.218 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:47.218 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:47.218 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:47.218 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:47.218 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:47.218 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:47.218 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:47.218 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:47.218 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:19:47.218 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:19:47.218 
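The identify pass above is driven by spdk_nvme_identify pointed at the kernel NVMe-oF/TCP target; the get_feature(0x01/0x02/0x04/0x05) failures appear to come from optional features the kernel target simply does not implement. A minimal sketch of that invocation, with the binary path, address and subsystem NQN copied from the trace above:

    # Re-run the identify step by hand against the kernel target.
    SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin
    "$SPDK_BIN/spdk_nvme_identify" \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'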
06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:19:47.218 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:47.218 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:47.218 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:19:47.218 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:47.218 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:19:47.218 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:19:47.218 06:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:47.785 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:48.044 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:19:48.044 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:19:48.044 00:19:48.044 real 0m2.933s 00:19:48.044 user 0m1.063s 00:19:48.044 sys 0m1.368s 00:19:48.044 06:06:39 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:48.044 06:06:39 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.044 ************************************ 00:19:48.044 END TEST nvmf_identify_kernel_target 00:19:48.044 ************************************ 00:19:48.044 06:06:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:48.044 06:06:39 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:19:48.044 06:06:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:48.044 06:06:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:48.044 06:06:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:48.044 ************************************ 00:19:48.044 START TEST nvmf_auth_host 00:19:48.044 ************************************ 00:19:48.044 06:06:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:19:48.302 * Looking for test storage... 
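clean_kernel_target above tears the kernel target down from the outside in before unloading the modules. A condensed sketch of that order, using the same configfs paths as the trace; the redirect target of the bare "echo 0" is not visible in the xtrace output, so treating it as the namespace's enable attribute is my reading of nvmf/common.sh:

    # Teardown order mirrored from the trace above.
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    echo 0 > "$subsys/namespaces/1/enable"        # stop serving the namespace (assumed target of 'echo 0')
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn  # detach subsystem from port
    rmdir "$subsys/namespaces/1"                  # remove the namespace
    rmdir /sys/kernel/config/nvmet/ports/1        # remove the port
    rmdir "$subsys"                               # remove the subsystem itself
    modprobe -r nvmet_tcp nvmet                   # finally unload the kernel target modules

The order matters: configfs generally refuses to rmdir a subsystem that is still linked into a port or still holds an enabled namespace, which is why the link and the namespace go first.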
00:19:48.302 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:48.302 06:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:48.302 06:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:19:48.302 06:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:48.302 06:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:48.302 06:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:48.302 06:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:48.302 06:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:48.302 06:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:48.302 06:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:48.302 06:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:48.302 06:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:48.302 06:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:48.302 06:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:19:48.302 06:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=d95af516-4532-4483-a837-b3cd72acabce 00:19:48.302 06:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:48.302 06:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:48.302 06:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:48.302 06:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:48.302 06:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:48.302 06:06:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:48.302 06:06:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:48.302 06:06:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:48.302 06:06:39 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.302 06:06:39 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.302 06:06:39 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.302 06:06:39 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:19:48.303 06:06:39 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.303 06:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:19:48.303 06:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:48.303 06:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:48.303 06:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:48.303 06:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:48.303 06:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:48.303 06:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:48.303 06:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:48.303 06:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:48.303 06:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:48.303 06:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:48.303 06:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:19:48.303 06:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:19:48.303 06:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:48.303 06:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:48.303 06:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:19:48.303 06:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:19:48.303 06:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:19:48.303 06:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:48.303 06:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:48.303 06:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:48.303 06:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:48.303 06:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:48.303 06:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:48.303 06:06:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:48.303 06:06:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:48.303 06:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:48.303 06:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:48.303 06:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:48.303 06:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:48.303 06:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:48.303 06:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:48.303 06:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:48.303 06:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:48.303 06:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:48.303 06:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:48.303 06:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:48.303 06:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:48.303 06:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:48.303 06:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:48.303 06:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:48.303 06:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:48.303 06:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:48.303 06:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:48.303 06:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:48.303 06:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:48.303 Cannot find device "nvmf_tgt_br" 00:19:48.303 06:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # true 00:19:48.303 06:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:48.303 Cannot find device "nvmf_tgt_br2" 00:19:48.303 06:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # true 00:19:48.303 06:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:48.303 06:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:48.303 Cannot find device "nvmf_tgt_br" 
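nvmf_veth_init first probes for leftovers from a previous run (the "Cannot find device" messages above are harmless; each failed probe is followed by a true), then rebuilds the test topology from the names and addresses defined just above. A condensed sketch of that topology, as the trace that follows shows; the link-up and iptables steps are omitted here for brevity:

    # Initiator stays in the root netns, the target interfaces move into nvmf_tgt_ns_spdk,
    # and the *_br veth peers are tied together with the nvmf_br bridge.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

The three pings at the end of the setup (10.0.0.2 and 10.0.0.3 from the root namespace, 10.0.0.1 from inside the target namespace) are the connectivity check for exactly this layout.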
00:19:48.303 06:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:19:48.303 06:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:48.303 Cannot find device "nvmf_tgt_br2" 00:19:48.303 06:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:19:48.303 06:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:48.303 06:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:48.303 06:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:48.303 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:48.303 06:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:19:48.303 06:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:48.303 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:48.303 06:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:19:48.303 06:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:48.303 06:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:48.303 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:48.303 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:48.561 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:48.561 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:48.561 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:48.561 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:48.561 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:48.561 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:48.561 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:48.562 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:48.562 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:48.562 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:48.562 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:48.562 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:48.562 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:48.562 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:48.562 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:48.562 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:48.562 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set 
nvmf_tgt_br2 master nvmf_br 00:19:48.562 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:48.562 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:48.562 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:48.562 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:48.562 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:19:48.562 00:19:48.562 --- 10.0.0.2 ping statistics --- 00:19:48.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:48.562 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:19:48.562 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:48.562 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:48.562 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:19:48.562 00:19:48.562 --- 10.0.0.3 ping statistics --- 00:19:48.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:48.562 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:19:48.562 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:48.562 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:48.562 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:19:48.562 00:19:48.562 --- 10.0.0.1 ping statistics --- 00:19:48.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:48.562 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:19:48.562 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:48.562 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 00:19:48.562 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:48.562 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:48.562 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:48.562 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:48.562 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:48.562 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:48.562 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:48.562 06:06:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:19:48.562 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:48.562 06:06:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:48.562 06:06:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.562 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=92524 00:19:48.562 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:19:48.562 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 92524 00:19:48.562 06:06:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 92524 ']' 00:19:48.562 06:06:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:48.562 06:06:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:48.562 06:06:40 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:48.562 06:06:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:48.562 06:06:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.129 06:06:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:49.129 06:06:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:19:49.129 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:49.129 06:06:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:49.129 06:06:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.129 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:49.129 06:06:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:19:49.129 06:06:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:19:49.129 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:49.129 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:49.129 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:19:49.129 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:19:49.129 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:19:49.129 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:49.129 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=644625c084d2df2658ccb932300fa1d4 00:19:49.129 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:19:49.129 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.yQE 00:19:49.129 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 644625c084d2df2658ccb932300fa1d4 0 00:19:49.129 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 644625c084d2df2658ccb932300fa1d4 0 00:19:49.129 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:49.129 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:49.129 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=644625c084d2df2658ccb932300fa1d4 00:19:49.129 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:19:49.129 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:19:49.129 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.yQE 00:19:49.129 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.yQE 00:19:49.129 06:06:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.yQE 00:19:49.129 06:06:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:19:49.129 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:49.129 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:49.129 06:06:40 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@724 -- # local -A digests 00:19:49.129 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:19:49.129 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:19:49.129 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:49.129 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=13936116cee36634daac1a40564e786ba4f3757824b718dfa937c6653e2385dc 00:19:49.129 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:49.129 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.te3 00:19:49.129 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 13936116cee36634daac1a40564e786ba4f3757824b718dfa937c6653e2385dc 3 00:19:49.129 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 13936116cee36634daac1a40564e786ba4f3757824b718dfa937c6653e2385dc 3 00:19:49.129 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:49.130 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:49.130 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=13936116cee36634daac1a40564e786ba4f3757824b718dfa937c6653e2385dc 00:19:49.130 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:19:49.130 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:19:49.130 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.te3 00:19:49.130 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.te3 00:19:49.130 06:06:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.te3 00:19:49.130 06:06:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:19:49.130 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:49.130 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:49.130 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:19:49.130 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:19:49.130 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:19:49.130 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:49.130 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ab09ed58f1dea0fb7e5298dd67b13471f05be0b32bc0b092 00:19:49.130 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:19:49.130 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.ha4 00:19:49.130 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ab09ed58f1dea0fb7e5298dd67b13471f05be0b32bc0b092 0 00:19:49.130 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ab09ed58f1dea0fb7e5298dd67b13471f05be0b32bc0b092 0 00:19:49.130 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:49.130 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:49.130 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ab09ed58f1dea0fb7e5298dd67b13471f05be0b32bc0b092 00:19:49.130 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:19:49.130 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # 
python - 00:19:49.130 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.ha4 00:19:49.130 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.ha4 00:19:49.130 06:06:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.ha4 00:19:49.130 06:06:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:19:49.130 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:49.130 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:49.130 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:19:49.130 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:19:49.130 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:19:49.130 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:49.130 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8c48da53abd7d8e9ca8128e5cb4c84e471a512a4e350850e 00:19:49.130 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:49.130 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.LDE 00:19:49.130 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8c48da53abd7d8e9ca8128e5cb4c84e471a512a4e350850e 2 00:19:49.130 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8c48da53abd7d8e9ca8128e5cb4c84e471a512a4e350850e 2 00:19:49.130 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:49.130 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:49.130 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8c48da53abd7d8e9ca8128e5cb4c84e471a512a4e350850e 00:19:49.130 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:19:49.130 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:19:49.389 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.LDE 00:19:49.389 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.LDE 00:19:49.389 06:06:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.LDE 00:19:49.389 06:06:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:19:49.389 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:49.389 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:49.389 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:19:49.389 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:19:49.389 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:19:49.389 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:49.389 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=cfbbc073f19734c68606c226f1f7cd06 00:19:49.389 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:49.389 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.ZiR 00:19:49.389 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key cfbbc073f19734c68606c226f1f7cd06 
1 00:19:49.389 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 cfbbc073f19734c68606c226f1f7cd06 1 00:19:49.389 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:49.389 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:49.389 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=cfbbc073f19734c68606c226f1f7cd06 00:19:49.389 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:19:49.389 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:19:49.389 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.ZiR 00:19:49.389 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.ZiR 00:19:49.389 06:06:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.ZiR 00:19:49.389 06:06:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:19:49.389 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:49.389 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:49.389 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:19:49.389 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:19:49.389 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:19:49.389 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:49.389 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=10b2eec677d9c56fbd48ec755df15d70 00:19:49.389 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:49.389 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.FJ3 00:19:49.389 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 10b2eec677d9c56fbd48ec755df15d70 1 00:19:49.389 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 10b2eec677d9c56fbd48ec755df15d70 1 00:19:49.389 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:49.389 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:49.389 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=10b2eec677d9c56fbd48ec755df15d70 00:19:49.389 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:19:49.389 06:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:19:49.389 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.FJ3 00:19:49.389 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.FJ3 00:19:49.389 06:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.FJ3 00:19:49.389 06:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:19:49.389 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:49.389 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:49.389 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:19:49.389 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:19:49.389 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:19:49.389 06:06:41 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:49.389 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c81c102732aa4c8a95908fc60a2aae7e67878e9605a56790 00:19:49.389 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:49.389 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Lzr 00:19:49.390 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c81c102732aa4c8a95908fc60a2aae7e67878e9605a56790 2 00:19:49.390 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c81c102732aa4c8a95908fc60a2aae7e67878e9605a56790 2 00:19:49.390 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:49.390 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:49.390 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c81c102732aa4c8a95908fc60a2aae7e67878e9605a56790 00:19:49.390 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:19:49.390 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:19:49.390 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Lzr 00:19:49.390 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Lzr 00:19:49.390 06:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Lzr 00:19:49.390 06:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:19:49.390 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:49.390 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:49.390 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:19:49.390 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:19:49.390 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:19:49.390 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:49.390 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2736fb0e9e11fd6dc470164eb2b14c5f 00:19:49.390 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:19:49.390 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Mjy 00:19:49.390 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2736fb0e9e11fd6dc470164eb2b14c5f 0 00:19:49.390 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2736fb0e9e11fd6dc470164eb2b14c5f 0 00:19:49.390 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:49.390 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:49.390 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2736fb0e9e11fd6dc470164eb2b14c5f 00:19:49.390 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:19:49.390 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:19:49.649 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Mjy 00:19:49.649 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Mjy 00:19:49.649 06:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Mjy 00:19:49.649 06:06:41 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:19:49.649 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:49.649 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:49.649 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:19:49.649 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:19:49.649 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:19:49.649 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:49.649 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c66622c0c5731f31a2cc6eb8667e1de5fa430b3b1cc7a502f022a7f63cffd7ed 00:19:49.649 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:49.649 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.ZkR 00:19:49.649 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c66622c0c5731f31a2cc6eb8667e1de5fa430b3b1cc7a502f022a7f63cffd7ed 3 00:19:49.649 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c66622c0c5731f31a2cc6eb8667e1de5fa430b3b1cc7a502f022a7f63cffd7ed 3 00:19:49.649 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:49.649 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:49.649 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c66622c0c5731f31a2cc6eb8667e1de5fa430b3b1cc7a502f022a7f63cffd7ed 00:19:49.649 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:19:49.649 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:19:49.649 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.ZkR 00:19:49.649 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.ZkR 00:19:49.649 06:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.ZkR 00:19:49.649 06:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:19:49.649 06:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 92524 00:19:49.649 06:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 92524 ']' 00:19:49.649 06:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:49.649 06:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:49.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:49.649 06:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
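Each secret above comes out of gen_dhchap_key: xxd pulls the requested number of random bytes from /dev/urandom as a hex string, a small python helper (format_dhchap_key in nvmf/common.sh, whose one-liner is not shown in the trace) wraps it into the "DHHC-1:<digest>:...:" form, and the result lands in a chmod-0600 temp file tracked in the keys[]/ckeys[] arrays. A rough sketch of the raw-secret half only; the file-name pattern and lengths mirror the trace (32 hex chars come from 16 random bytes, 48 from 24, 64 from 32):

    # Generate one raw hex secret the way gen_dhchap_key does (null digest, 32 hex chars).
    len=32
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # e.g. 644625c084d2df2658ccb932300fa1d4
    file=$(mktemp -t spdk.key-null.XXX)              # e.g. /tmp/spdk.key-null.yQE
    # Placeholder write: the real helper stores the DHHC-1 wrapped form of "$key" here.
    printf '%s\n' "$key" > "$file"
    chmod 0600 "$file"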
00:19:49.649 06:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:49.649 06:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.906 06:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:49.906 06:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:19:49.906 06:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:49.906 06:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.yQE 00:19:49.906 06:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.906 06:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.906 06:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.906 06:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.te3 ]] 00:19:49.906 06:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.te3 00:19:49.906 06:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.906 06:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.906 06:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.906 06:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:49.906 06:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.ha4 00:19:49.906 06:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.906 06:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.906 06:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.906 06:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.LDE ]] 00:19:49.906 06:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.LDE 00:19:49.906 06:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.906 06:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.906 06:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.907 06:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:49.907 06:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.ZiR 00:19:49.907 06:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.907 06:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.907 06:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.907 06:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.FJ3 ]] 00:19:49.907 06:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.FJ3 00:19:49.907 06:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.907 06:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.907 06:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.907 06:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
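With the nvmf_tgt from waitforlisten answering on /var/tmp/spdk.sock, each generated file is handed to the target through the keyring_file_add_key RPC; rpc_cmd in the trace is the test framework's thin wrapper around scripts/rpc.py. A minimal sketch of that registration loop, assuming the keys[]/ckeys[] arrays populated above and the usual in-repo rpc.py path:

    # Register the DHCHAP key files with the running nvmf_tgt.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for i in "${!keys[@]}"; do
        "$RPC" keyring_file_add_key "key$i" "${keys[$i]}"
        # ckey4 is intentionally empty in this run, so the controller-side key is optional.
        [[ -n "${ckeys[$i]:-}" ]] && "$RPC" keyring_file_add_key "ckey$i" "${ckeys[$i]}"
    done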
00:19:49.907 06:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Lzr 00:19:49.907 06:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.907 06:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.907 06:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.907 06:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Mjy ]] 00:19:49.907 06:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Mjy 00:19:49.907 06:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.907 06:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.907 06:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.907 06:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:49.907 06:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.ZkR 00:19:49.907 06:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.907 06:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.907 06:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.907 06:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:19:49.907 06:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:19:49.907 06:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:19:49.907 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:49.907 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:49.907 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:49.907 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:49.907 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:49.907 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:49.907 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:49.907 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:49.907 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:49.907 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:49.907 06:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:19:49.907 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:19:49.907 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:19:49.907 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:49.907 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:49.907 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:49.907 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
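nvmet_auth_init then points configure_kernel_target at nqn.2024-02.io.spdk:cnode0 and 10.0.0.1, and the kernel-side subsystem is built under the configfs paths defined just above. A condensed sketch of that sequence; the echoed values are the ones visible in the trace that follows, while the attribute file names (the redirect targets, which xtrace does not print) are my reading of the standard nvmet configfs layout:

    # Expose a local NVMe namespace as a kernel NVMe-oF/TCP target.
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    port=$nvmet/ports/1
    modprobe nvmet                                  # nvmet_tcp is pulled in when the TCP port is enabled
    mkdir "$subsys" "$subsys/namespaces/1" "$port"
    echo "SPDK-nqn.2024-02.io.spdk:cnode0" > "$subsys/attr_model"          # shows up as the Model Number
    echo 1             > "$subsys/attr_allow_any_host"
    echo /dev/nvme1n1  > "$subsys/namespaces/1/device_path"   # the block device selected by the loop below
    echo 1             > "$subsys/namespaces/1/enable"
    echo 10.0.0.1      > "$port/addr_traddr"
    echo tcp           > "$port/addr_trtype"
    echo 4420          > "$port/addr_trsvcid"
    echo ipv4          > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"             # publish the subsystem on the port

The nvme discover run at the end of the trace is the sanity check that this worked: it lists the discovery subsystem plus nqn.2024-02.io.spdk:cnode0 on 10.0.0.1:4420, exactly as shown.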
00:19:49.907 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:19:49.907 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:19:50.164 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:19:50.164 06:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:50.422 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:50.422 Waiting for block devices as requested 00:19:50.422 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:50.679 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:51.245 06:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:19:51.245 06:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:51.245 06:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:19:51.245 06:06:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:19:51.245 06:06:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:51.245 06:06:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:51.245 06:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:19:51.245 06:06:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:19:51.245 06:06:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:19:51.245 No valid GPT data, bailing 00:19:51.245 06:06:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:51.245 06:06:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:19:51.245 06:06:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:19:51.245 06:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:19:51.245 06:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:19:51.245 06:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:19:51.245 06:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:19:51.245 06:06:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:19:51.245 06:06:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:19:51.245 06:06:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:51.245 06:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:19:51.245 06:06:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:19:51.245 06:06:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:19:51.245 No valid GPT data, bailing 00:19:51.245 06:06:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:19:51.245 06:06:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:19:51.245 06:06:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:19:51.245 06:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:19:51.245 06:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in 
/sys/block/nvme* 00:19:51.245 06:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:19:51.245 06:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:19:51.245 06:06:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:19:51.245 06:06:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:19:51.245 06:06:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:51.245 06:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:19:51.245 06:06:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:19:51.245 06:06:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:19:51.504 No valid GPT data, bailing 00:19:51.504 06:06:43 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:19:51.504 06:06:43 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:19:51.504 06:06:43 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:19:51.504 06:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:19:51.504 06:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:19:51.504 06:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:19:51.504 06:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:19:51.504 06:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:19:51.504 06:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:19:51.504 06:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:51.504 06:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:19:51.504 06:06:43 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:19:51.504 06:06:43 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:19:51.504 No valid GPT data, bailing 00:19:51.504 06:06:43 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:19:51.504 06:06:43 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:19:51.504 06:06:43 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:19:51.504 06:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:19:51.504 06:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:19:51.504 06:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:51.504 06:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:51.504 06:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:19:51.504 06:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:19:51.504 06:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:19:51.504 06:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:19:51.504 06:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:19:51.504 06:06:43 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:19:51.504 06:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:19:51.504 06:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:19:51.504 06:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:19:51.504 06:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:19:51.504 06:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid=d95af516-4532-4483-a837-b3cd72acabce -a 10.0.0.1 -t tcp -s 4420 00:19:51.504 00:19:51.504 Discovery Log Number of Records 2, Generation counter 2 00:19:51.504 =====Discovery Log Entry 0====== 00:19:51.504 trtype: tcp 00:19:51.504 adrfam: ipv4 00:19:51.504 subtype: current discovery subsystem 00:19:51.504 treq: not specified, sq flow control disable supported 00:19:51.504 portid: 1 00:19:51.504 trsvcid: 4420 00:19:51.504 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:51.504 traddr: 10.0.0.1 00:19:51.504 eflags: none 00:19:51.504 sectype: none 00:19:51.504 =====Discovery Log Entry 1====== 00:19:51.504 trtype: tcp 00:19:51.504 adrfam: ipv4 00:19:51.504 subtype: nvme subsystem 00:19:51.504 treq: not specified, sq flow control disable supported 00:19:51.504 portid: 1 00:19:51.504 trsvcid: 4420 00:19:51.504 subnqn: nqn.2024-02.io.spdk:cnode0 00:19:51.504 traddr: 10.0.0.1 00:19:51.504 eflags: none 00:19:51.504 sectype: none 00:19:51.504 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:51.504 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:19:51.504 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:19:51.504 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:51.504 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:51.504 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:51.504 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:51.504 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:51.504 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWIwOWVkNThmMWRlYTBmYjdlNTI5OGRkNjdiMTM0NzFmMDViZTBiMzJiYzBiMDkyahDVDg==: 00:19:51.504 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGM0OGRhNTNhYmQ3ZDhlOWNhODEyOGU1Y2I0Yzg0ZTQ3MWE1MTJhNGUzNTA4NTBllIKtkA==: 00:19:51.504 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:51.504 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:51.763 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWIwOWVkNThmMWRlYTBmYjdlNTI5OGRkNjdiMTM0NzFmMDViZTBiMzJiYzBiMDkyahDVDg==: 00:19:51.763 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGM0OGRhNTNhYmQ3ZDhlOWNhODEyOGU1Y2I0Yzg0ZTQ3MWE1MTJhNGUzNTA4NTBllIKtkA==: ]] 00:19:51.763 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGM0OGRhNTNhYmQ3ZDhlOWNhODEyOGU1Y2I0Yzg0ZTQ3MWE1MTJhNGUzNTA4NTBllIKtkA==: 00:19:51.763 06:06:43 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@93 -- # IFS=, 00:19:51.763 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:19:51.763 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:19:51.763 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:51.763 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:19:51.763 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:51.763 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:19:51.763 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:51.763 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:51.763 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:51.763 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:51.763 06:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.763 06:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.763 06:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.763 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:51.763 06:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:51.763 06:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:51.763 06:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:51.763 06:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:51.763 06:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:51.763 06:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:51.763 06:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:51.763 06:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:51.763 06:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:51.763 06:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:51.763 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.763 06:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.763 06:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.763 nvme0n1 00:19:51.763 06:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.763 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:51.763 06:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.763 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:51.763 06:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.763 06:06:43 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.763 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.763 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:51.763 06:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.763 06:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.764 06:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.764 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:19:51.764 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:51.764 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:51.764 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:19:51.764 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:51.764 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:51.764 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:51.764 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:51.764 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ0NjI1YzA4NGQyZGYyNjU4Y2NiOTMyMzAwZmExZDRDCbJ3: 00:19:51.764 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTM5MzYxMTZjZWUzNjYzNGRhYWMxYTQwNTY0ZTc4NmJhNGYzNzU3ODI0YjcxOGRmYTkzN2M2NjUzZTIzODVkY07wd4w=: 00:19:51.764 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:51.764 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:51.764 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ0NjI1YzA4NGQyZGYyNjU4Y2NiOTMyMzAwZmExZDRDCbJ3: 00:19:51.764 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTM5MzYxMTZjZWUzNjYzNGRhYWMxYTQwNTY0ZTc4NmJhNGYzNzU3ODI0YjcxOGRmYTkzN2M2NjUzZTIzODVkY07wd4w=: ]] 00:19:51.764 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTM5MzYxMTZjZWUzNjYzNGRhYWMxYTQwNTY0ZTc4NmJhNGYzNzU3ODI0YjcxOGRmYTkzN2M2NjUzZTIzODVkY07wd4w=: 00:19:51.764 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:19:51.764 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:51.764 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:51.764 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:51.764 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:51.764 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:51.764 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:51.764 06:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.764 06:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.023 06:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.023 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:52.023 06:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:52.023 06:06:43 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:19:52.023 06:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:52.023 06:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:52.023 06:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:52.023 06:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:52.023 06:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:52.023 06:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:52.023 06:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:52.023 06:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:52.023 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.023 06:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.023 06:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.023 nvme0n1 00:19:52.023 06:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.023 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:52.023 06:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.023 06:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.023 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:52.023 06:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.023 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.023 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:52.023 06:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.023 06:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.023 06:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.023 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:52.023 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:52.023 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:52.023 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:52.023 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:52.023 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:52.023 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWIwOWVkNThmMWRlYTBmYjdlNTI5OGRkNjdiMTM0NzFmMDViZTBiMzJiYzBiMDkyahDVDg==: 00:19:52.023 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGM0OGRhNTNhYmQ3ZDhlOWNhODEyOGU1Y2I0Yzg0ZTQ3MWE1MTJhNGUzNTA4NTBllIKtkA==: 00:19:52.023 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:52.023 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:52.023 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YWIwOWVkNThmMWRlYTBmYjdlNTI5OGRkNjdiMTM0NzFmMDViZTBiMzJiYzBiMDkyahDVDg==: 00:19:52.023 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGM0OGRhNTNhYmQ3ZDhlOWNhODEyOGU1Y2I0Yzg0ZTQ3MWE1MTJhNGUzNTA4NTBllIKtkA==: ]] 00:19:52.023 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGM0OGRhNTNhYmQ3ZDhlOWNhODEyOGU1Y2I0Yzg0ZTQ3MWE1MTJhNGUzNTA4NTBllIKtkA==: 00:19:52.023 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:19:52.023 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:52.023 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:52.023 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:52.023 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:52.023 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:52.023 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:52.023 06:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.023 06:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.023 06:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.023 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:52.023 06:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:52.023 06:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:52.023 06:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:52.023 06:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:52.023 06:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:52.023 06:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:52.023 06:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:52.023 06:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:52.023 06:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:52.023 06:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:52.023 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:52.023 06:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.023 06:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.282 nvme0n1 00:19:52.282 06:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.282 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:52.282 06:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.282 06:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.282 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:52.282 06:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.282 06:06:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.282 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:52.282 06:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.282 06:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.282 06:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.282 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:52.282 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:19:52.282 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:52.282 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:52.282 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:52.282 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:52.282 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2ZiYmMwNzNmMTk3MzRjNjg2MDZjMjI2ZjFmN2NkMDZLb0jA: 00:19:52.282 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTBiMmVlYzY3N2Q5YzU2ZmJkNDhlYzc1NWRmMTVkNzANgd8Z: 00:19:52.282 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:52.282 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:52.282 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2ZiYmMwNzNmMTk3MzRjNjg2MDZjMjI2ZjFmN2NkMDZLb0jA: 00:19:52.282 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTBiMmVlYzY3N2Q5YzU2ZmJkNDhlYzc1NWRmMTVkNzANgd8Z: ]] 00:19:52.282 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTBiMmVlYzY3N2Q5YzU2ZmJkNDhlYzc1NWRmMTVkNzANgd8Z: 00:19:52.282 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:19:52.282 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:52.282 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:52.282 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:52.282 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:52.282 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:52.282 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:52.282 06:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.282 06:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.282 06:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.282 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:52.282 06:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:52.282 06:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:52.282 06:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:52.282 06:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:52.282 06:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:52.282 06:06:43 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:52.282 06:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:52.282 06:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:52.282 06:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:52.282 06:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:52.282 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.282 06:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.282 06:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.282 nvme0n1 00:19:52.282 06:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.282 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:52.282 06:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.282 06:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.282 06:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:52.282 06:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzgxYzEwMjczMmFhNGM4YTk1OTA4ZmM2MGEyYWFlN2U2Nzg3OGU5NjA1YTU2NzkwAoSCHA==: 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjczNmZiMGU5ZTExZmQ2ZGM0NzAxNjRlYjJiMTRjNWaqik2x: 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzgxYzEwMjczMmFhNGM4YTk1OTA4ZmM2MGEyYWFlN2U2Nzg3OGU5NjA1YTU2NzkwAoSCHA==: 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjczNmZiMGU5ZTExZmQ2ZGM0NzAxNjRlYjJiMTRjNWaqik2x: ]] 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjczNmZiMGU5ZTExZmQ2ZGM0NzAxNjRlYjJiMTRjNWaqik2x: 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:19:52.541 06:06:44 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.541 nvme0n1 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzY2NjIyYzBjNTczMWYzMWEyY2M2ZWI4NjY3ZTFkZTVmYTQzMGIzYjFjYzdhNTAyZjAyMmE3ZjYzY2ZmZDdlZJXfI5A=: 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzY2NjIyYzBjNTczMWYzMWEyY2M2ZWI4NjY3ZTFkZTVmYTQzMGIzYjFjYzdhNTAyZjAyMmE3ZjYzY2ZmZDdlZJXfI5A=: 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:52.541 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:52.542 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:52.542 06:06:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.542 06:06:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.542 06:06:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.542 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:52.542 06:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:52.542 06:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:52.542 06:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:52.542 06:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:52.542 06:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:52.542 06:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:52.542 06:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:52.542 06:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:52.542 06:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:52.542 06:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:52.542 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:52.542 06:06:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:19:52.542 06:06:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.799 nvme0n1 00:19:52.799 06:06:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.799 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:52.799 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:52.799 06:06:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.799 06:06:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.799 06:06:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.799 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.799 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:52.799 06:06:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.799 06:06:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.799 06:06:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.799 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:52.799 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:52.799 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:19:52.799 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:52.799 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:52.799 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:52.799 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:52.799 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ0NjI1YzA4NGQyZGYyNjU4Y2NiOTMyMzAwZmExZDRDCbJ3: 00:19:52.799 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTM5MzYxMTZjZWUzNjYzNGRhYWMxYTQwNTY0ZTc4NmJhNGYzNzU3ODI0YjcxOGRmYTkzN2M2NjUzZTIzODVkY07wd4w=: 00:19:52.800 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:52.800 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:53.058 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ0NjI1YzA4NGQyZGYyNjU4Y2NiOTMyMzAwZmExZDRDCbJ3: 00:19:53.058 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTM5MzYxMTZjZWUzNjYzNGRhYWMxYTQwNTY0ZTc4NmJhNGYzNzU3ODI0YjcxOGRmYTkzN2M2NjUzZTIzODVkY07wd4w=: ]] 00:19:53.058 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTM5MzYxMTZjZWUzNjYzNGRhYWMxYTQwNTY0ZTc4NmJhNGYzNzU3ODI0YjcxOGRmYTkzN2M2NjUzZTIzODVkY07wd4w=: 00:19:53.058 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:19:53.058 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:53.058 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:53.058 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:53.058 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:53.058 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:53.058 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe3072 00:19:53.058 06:06:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.058 06:06:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.058 06:06:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.058 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:53.058 06:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:53.058 06:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:53.058 06:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:53.058 06:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:53.058 06:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:53.058 06:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:53.058 06:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:53.058 06:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:53.058 06:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:53.058 06:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:53.058 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.058 06:06:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.058 06:06:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.316 nvme0n1 00:19:53.316 06:06:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.316 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:53.316 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:53.316 06:06:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.316 06:06:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.316 06:06:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.316 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.316 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:53.316 06:06:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.316 06:06:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.316 06:06:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.316 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:53.316 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:19:53.316 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:53.316 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:53.316 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:53.316 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:53.316 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YWIwOWVkNThmMWRlYTBmYjdlNTI5OGRkNjdiMTM0NzFmMDViZTBiMzJiYzBiMDkyahDVDg==: 00:19:53.316 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGM0OGRhNTNhYmQ3ZDhlOWNhODEyOGU1Y2I0Yzg0ZTQ3MWE1MTJhNGUzNTA4NTBllIKtkA==: 00:19:53.316 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:53.316 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:53.316 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWIwOWVkNThmMWRlYTBmYjdlNTI5OGRkNjdiMTM0NzFmMDViZTBiMzJiYzBiMDkyahDVDg==: 00:19:53.316 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGM0OGRhNTNhYmQ3ZDhlOWNhODEyOGU1Y2I0Yzg0ZTQ3MWE1MTJhNGUzNTA4NTBllIKtkA==: ]] 00:19:53.316 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGM0OGRhNTNhYmQ3ZDhlOWNhODEyOGU1Y2I0Yzg0ZTQ3MWE1MTJhNGUzNTA4NTBllIKtkA==: 00:19:53.316 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:19:53.316 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:53.316 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:53.316 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:53.316 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:53.316 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:53.316 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:53.316 06:06:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.316 06:06:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.316 06:06:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.316 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:53.316 06:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:53.316 06:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:53.316 06:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:53.316 06:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:53.316 06:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:53.316 06:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:53.316 06:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:53.316 06:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:53.316 06:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:53.316 06:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:53.316 06:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:53.316 06:06:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.316 06:06:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.574 nvme0n1 00:19:53.574 06:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.574 06:06:45 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:53.574 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:53.574 06:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.574 06:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.574 06:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.574 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.574 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:53.574 06:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.574 06:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.574 06:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.574 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:53.574 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:19:53.574 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:53.574 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:53.574 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:53.574 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:53.574 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2ZiYmMwNzNmMTk3MzRjNjg2MDZjMjI2ZjFmN2NkMDZLb0jA: 00:19:53.574 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTBiMmVlYzY3N2Q5YzU2ZmJkNDhlYzc1NWRmMTVkNzANgd8Z: 00:19:53.574 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:53.574 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:53.574 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2ZiYmMwNzNmMTk3MzRjNjg2MDZjMjI2ZjFmN2NkMDZLb0jA: 00:19:53.574 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTBiMmVlYzY3N2Q5YzU2ZmJkNDhlYzc1NWRmMTVkNzANgd8Z: ]] 00:19:53.574 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTBiMmVlYzY3N2Q5YzU2ZmJkNDhlYzc1NWRmMTVkNzANgd8Z: 00:19:53.574 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:19:53.574 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:53.574 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:53.574 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:53.574 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:53.574 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:53.574 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:53.574 06:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.575 06:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.575 06:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.575 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:53.575 06:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:19:53.575 06:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:53.575 06:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:53.575 06:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:53.575 06:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:53.575 06:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:53.575 06:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:53.575 06:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:53.575 06:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:53.575 06:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:53.575 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:53.575 06:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.575 06:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.833 nvme0n1 00:19:53.833 06:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.833 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:53.833 06:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.833 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:53.833 06:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.833 06:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.833 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.833 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:53.833 06:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.833 06:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.833 06:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.833 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:53.833 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:19:53.833 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:53.833 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:53.833 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:53.833 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:53.833 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzgxYzEwMjczMmFhNGM4YTk1OTA4ZmM2MGEyYWFlN2U2Nzg3OGU5NjA1YTU2NzkwAoSCHA==: 00:19:53.833 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjczNmZiMGU5ZTExZmQ2ZGM0NzAxNjRlYjJiMTRjNWaqik2x: 00:19:53.833 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:53.833 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:53.833 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:YzgxYzEwMjczMmFhNGM4YTk1OTA4ZmM2MGEyYWFlN2U2Nzg3OGU5NjA1YTU2NzkwAoSCHA==: 00:19:53.833 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjczNmZiMGU5ZTExZmQ2ZGM0NzAxNjRlYjJiMTRjNWaqik2x: ]] 00:19:53.833 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjczNmZiMGU5ZTExZmQ2ZGM0NzAxNjRlYjJiMTRjNWaqik2x: 00:19:53.833 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:19:53.833 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:53.833 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:53.833 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:53.833 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:53.833 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:53.833 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:53.833 06:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.833 06:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.833 06:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.833 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:53.833 06:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:53.833 06:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:53.833 06:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:53.833 06:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:53.833 06:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:53.833 06:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:53.833 06:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:53.833 06:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:53.833 06:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:53.833 06:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:53.833 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:53.833 06:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.833 06:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.833 nvme0n1 00:19:53.833 06:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.833 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:53.833 06:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.833 06:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.833 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:53.833 06:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.093 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:19:54.093 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:54.093 06:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.093 06:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.093 06:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.093 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:54.093 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:19:54.093 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:54.093 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:54.093 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:54.093 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:54.093 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzY2NjIyYzBjNTczMWYzMWEyY2M2ZWI4NjY3ZTFkZTVmYTQzMGIzYjFjYzdhNTAyZjAyMmE3ZjYzY2ZmZDdlZJXfI5A=: 00:19:54.093 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:54.093 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:54.093 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:54.093 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzY2NjIyYzBjNTczMWYzMWEyY2M2ZWI4NjY3ZTFkZTVmYTQzMGIzYjFjYzdhNTAyZjAyMmE3ZjYzY2ZmZDdlZJXfI5A=: 00:19:54.093 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:54.093 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:19:54.093 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:54.093 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:54.093 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:54.093 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:54.093 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:54.093 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:54.093 06:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.093 06:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.093 06:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.093 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:54.093 06:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:54.093 06:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:54.093 06:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:54.093 06:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:54.093 06:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:54.093 06:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:54.093 06:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:54.093 06:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
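Each connect_authenticate pass in this block follows the same shape: nvmet_auth_set_key programs the matching secret, hash ('hmac(sha256)') and DH group on the kernel target's host entry (the echo lines above), then the initiator restricts its allowed DH-HMAC-CHAP digests and DH groups, attaches with the corresponding key pair from the keyring, checks that the controller appeared, and detaches before the next combination. A condensed sketch of one such cycle using the same RPCs the log drives through rpc_cmd (the host/subsystem NQNs and the 10.0.0.1:4420 listener are the values configured earlier in this run):

  # limit the initiator to one digest/DH-group combination for this pass
  ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  # attach with DH-HMAC-CHAP, supplying both the host key and the controller (bidirectional) key
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # the controller should now be visible as nvme0; tear it down before the next combination
  ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0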
00:19:54.093 06:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:54.093 06:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:54.093 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:54.093 06:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.093 06:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.093 nvme0n1 00:19:54.093 06:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.093 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:54.093 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:54.093 06:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.093 06:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.093 06:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.093 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.093 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:54.093 06:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.094 06:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.352 06:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.352 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:54.352 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:54.352 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:19:54.352 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:54.352 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:54.352 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:54.352 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:54.352 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ0NjI1YzA4NGQyZGYyNjU4Y2NiOTMyMzAwZmExZDRDCbJ3: 00:19:54.353 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTM5MzYxMTZjZWUzNjYzNGRhYWMxYTQwNTY0ZTc4NmJhNGYzNzU3ODI0YjcxOGRmYTkzN2M2NjUzZTIzODVkY07wd4w=: 00:19:54.353 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:54.353 06:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:54.920 06:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ0NjI1YzA4NGQyZGYyNjU4Y2NiOTMyMzAwZmExZDRDCbJ3: 00:19:54.920 06:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTM5MzYxMTZjZWUzNjYzNGRhYWMxYTQwNTY0ZTc4NmJhNGYzNzU3ODI0YjcxOGRmYTkzN2M2NjUzZTIzODVkY07wd4w=: ]] 00:19:54.920 06:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTM5MzYxMTZjZWUzNjYzNGRhYWMxYTQwNTY0ZTc4NmJhNGYzNzU3ODI0YjcxOGRmYTkzN2M2NjUzZTIzODVkY07wd4w=: 00:19:54.920 06:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:19:54.920 06:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
00:19:54.920 06:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:54.920 06:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:54.920 06:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:54.920 06:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:54.920 06:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:54.920 06:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.920 06:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.920 06:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.920 06:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:54.920 06:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:54.920 06:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:54.920 06:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:54.920 06:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:54.920 06:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:54.920 06:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:54.920 06:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:54.920 06:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:54.920 06:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:54.921 06:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:54.921 06:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:54.921 06:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.921 06:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.179 nvme0n1 00:19:55.179 06:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.179 06:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:55.179 06:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.179 06:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.179 06:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:55.179 06:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.179 06:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.179 06:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:55.179 06:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.179 06:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.179 06:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.179 06:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:55.179 06:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 1 00:19:55.179 06:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:55.179 06:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:55.179 06:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:55.179 06:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:55.179 06:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWIwOWVkNThmMWRlYTBmYjdlNTI5OGRkNjdiMTM0NzFmMDViZTBiMzJiYzBiMDkyahDVDg==: 00:19:55.179 06:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGM0OGRhNTNhYmQ3ZDhlOWNhODEyOGU1Y2I0Yzg0ZTQ3MWE1MTJhNGUzNTA4NTBllIKtkA==: 00:19:55.179 06:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:55.179 06:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:55.179 06:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWIwOWVkNThmMWRlYTBmYjdlNTI5OGRkNjdiMTM0NzFmMDViZTBiMzJiYzBiMDkyahDVDg==: 00:19:55.179 06:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGM0OGRhNTNhYmQ3ZDhlOWNhODEyOGU1Y2I0Yzg0ZTQ3MWE1MTJhNGUzNTA4NTBllIKtkA==: ]] 00:19:55.179 06:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGM0OGRhNTNhYmQ3ZDhlOWNhODEyOGU1Y2I0Yzg0ZTQ3MWE1MTJhNGUzNTA4NTBllIKtkA==: 00:19:55.179 06:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:19:55.179 06:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:55.179 06:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:55.179 06:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:55.179 06:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:55.179 06:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:55.179 06:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:55.179 06:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.179 06:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.179 06:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.179 06:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:55.179 06:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:55.179 06:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:55.179 06:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:55.179 06:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:55.179 06:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:55.179 06:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:55.179 06:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:55.179 06:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:55.179 06:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:55.179 06:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:55.179 06:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.179 06:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.179 06:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.438 nvme0n1 00:19:55.438 06:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.438 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:55.438 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:55.438 06:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.438 06:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.438 06:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.438 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.438 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:55.438 06:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.438 06:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.438 06:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.438 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:55.438 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:19:55.438 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:55.438 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:55.438 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:55.438 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:55.438 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2ZiYmMwNzNmMTk3MzRjNjg2MDZjMjI2ZjFmN2NkMDZLb0jA: 00:19:55.438 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTBiMmVlYzY3N2Q5YzU2ZmJkNDhlYzc1NWRmMTVkNzANgd8Z: 00:19:55.438 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:55.438 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:55.438 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2ZiYmMwNzNmMTk3MzRjNjg2MDZjMjI2ZjFmN2NkMDZLb0jA: 00:19:55.438 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTBiMmVlYzY3N2Q5YzU2ZmJkNDhlYzc1NWRmMTVkNzANgd8Z: ]] 00:19:55.438 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTBiMmVlYzY3N2Q5YzU2ZmJkNDhlYzc1NWRmMTVkNzANgd8Z: 00:19:55.438 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:19:55.438 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:55.438 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:55.438 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:55.438 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:55.438 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:55.438 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:19:55.438 06:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.438 06:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.438 06:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.438 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:55.438 06:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:55.438 06:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:55.438 06:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:55.438 06:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:55.438 06:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:55.438 06:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:55.438 06:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:55.438 06:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:55.438 06:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:55.438 06:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:55.438 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:55.438 06:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.438 06:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.696 nvme0n1 00:19:55.696 06:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.696 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:55.696 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:55.696 06:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.696 06:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.697 06:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.697 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.697 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:55.697 06:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.697 06:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.697 06:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.697 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:55.697 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:19:55.697 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:55.697 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:55.697 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:55.697 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:55.697 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YzgxYzEwMjczMmFhNGM4YTk1OTA4ZmM2MGEyYWFlN2U2Nzg3OGU5NjA1YTU2NzkwAoSCHA==: 00:19:55.697 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjczNmZiMGU5ZTExZmQ2ZGM0NzAxNjRlYjJiMTRjNWaqik2x: 00:19:55.697 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:55.697 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:55.697 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzgxYzEwMjczMmFhNGM4YTk1OTA4ZmM2MGEyYWFlN2U2Nzg3OGU5NjA1YTU2NzkwAoSCHA==: 00:19:55.697 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjczNmZiMGU5ZTExZmQ2ZGM0NzAxNjRlYjJiMTRjNWaqik2x: ]] 00:19:55.697 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjczNmZiMGU5ZTExZmQ2ZGM0NzAxNjRlYjJiMTRjNWaqik2x: 00:19:55.697 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:19:55.697 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:55.697 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:55.697 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:55.697 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:55.697 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:55.697 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:55.697 06:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.697 06:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.955 06:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.955 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:55.955 06:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:55.955 06:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:55.955 06:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:55.955 06:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:55.955 06:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:55.955 06:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:55.955 06:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:55.955 06:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:55.955 06:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:55.955 06:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:55.955 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:55.955 06:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.955 06:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.955 nvme0n1 00:19:55.955 06:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.955 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:19:55.955 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:55.955 06:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.955 06:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.955 06:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.214 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.214 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:56.214 06:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.214 06:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.214 06:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.214 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:56.214 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:19:56.214 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:56.214 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:56.214 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:56.214 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:56.214 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzY2NjIyYzBjNTczMWYzMWEyY2M2ZWI4NjY3ZTFkZTVmYTQzMGIzYjFjYzdhNTAyZjAyMmE3ZjYzY2ZmZDdlZJXfI5A=: 00:19:56.214 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:56.214 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:56.214 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:56.214 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzY2NjIyYzBjNTczMWYzMWEyY2M2ZWI4NjY3ZTFkZTVmYTQzMGIzYjFjYzdhNTAyZjAyMmE3ZjYzY2ZmZDdlZJXfI5A=: 00:19:56.214 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:56.214 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:19:56.214 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:56.214 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:56.214 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:56.214 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:56.214 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:56.214 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:56.214 06:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.214 06:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.214 06:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.214 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:56.214 06:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:56.214 06:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:56.214 06:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:56.214 06:06:47 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:56.214 06:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:56.214 06:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:56.214 06:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:56.214 06:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:56.214 06:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:56.214 06:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:56.214 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:56.214 06:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.214 06:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.214 nvme0n1 00:19:56.214 06:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.214 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:56.214 06:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.214 06:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.215 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:56.474 06:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.474 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.474 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:56.474 06:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.474 06:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.474 06:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.474 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:56.474 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:56.474 06:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:19:56.474 06:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:56.474 06:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:56.474 06:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:56.474 06:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:56.474 06:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ0NjI1YzA4NGQyZGYyNjU4Y2NiOTMyMzAwZmExZDRDCbJ3: 00:19:56.474 06:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTM5MzYxMTZjZWUzNjYzNGRhYWMxYTQwNTY0ZTc4NmJhNGYzNzU3ODI0YjcxOGRmYTkzN2M2NjUzZTIzODVkY07wd4w=: 00:19:56.474 06:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:56.474 06:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:58.374 06:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ0NjI1YzA4NGQyZGYyNjU4Y2NiOTMyMzAwZmExZDRDCbJ3: 00:19:58.374 06:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:MTM5MzYxMTZjZWUzNjYzNGRhYWMxYTQwNTY0ZTc4NmJhNGYzNzU3ODI0YjcxOGRmYTkzN2M2NjUzZTIzODVkY07wd4w=: ]] 00:19:58.374 06:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTM5MzYxMTZjZWUzNjYzNGRhYWMxYTQwNTY0ZTc4NmJhNGYzNzU3ODI0YjcxOGRmYTkzN2M2NjUzZTIzODVkY07wd4w=: 00:19:58.374 06:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:19:58.374 06:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:58.374 06:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:58.374 06:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:58.374 06:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:58.374 06:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:58.374 06:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:58.374 06:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.374 06:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.374 06:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.374 06:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:58.374 06:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:58.374 06:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:58.374 06:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:58.374 06:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:58.374 06:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:58.374 06:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:58.374 06:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:58.374 06:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:58.374 06:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:58.374 06:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:58.374 06:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.374 06:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.374 06:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.941 nvme0n1 00:19:58.941 06:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.941 06:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:58.941 06:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:58.941 06:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.941 06:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.941 06:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.941 06:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.941 06:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- 
# rpc_cmd bdev_nvme_detach_controller nvme0 00:19:58.941 06:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.941 06:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.941 06:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.941 06:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:58.941 06:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:19:58.941 06:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:58.941 06:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:58.941 06:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:58.941 06:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:58.941 06:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWIwOWVkNThmMWRlYTBmYjdlNTI5OGRkNjdiMTM0NzFmMDViZTBiMzJiYzBiMDkyahDVDg==: 00:19:58.941 06:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGM0OGRhNTNhYmQ3ZDhlOWNhODEyOGU1Y2I0Yzg0ZTQ3MWE1MTJhNGUzNTA4NTBllIKtkA==: 00:19:58.941 06:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:58.941 06:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:58.942 06:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWIwOWVkNThmMWRlYTBmYjdlNTI5OGRkNjdiMTM0NzFmMDViZTBiMzJiYzBiMDkyahDVDg==: 00:19:58.942 06:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGM0OGRhNTNhYmQ3ZDhlOWNhODEyOGU1Y2I0Yzg0ZTQ3MWE1MTJhNGUzNTA4NTBllIKtkA==: ]] 00:19:58.942 06:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGM0OGRhNTNhYmQ3ZDhlOWNhODEyOGU1Y2I0Yzg0ZTQ3MWE1MTJhNGUzNTA4NTBllIKtkA==: 00:19:58.942 06:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:19:58.942 06:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:58.942 06:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:58.942 06:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:58.942 06:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:58.942 06:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:58.942 06:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:58.942 06:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.942 06:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.942 06:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.942 06:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:58.942 06:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:58.942 06:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:58.942 06:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:58.942 06:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:58.942 06:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:58.942 06:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
tcp ]] 00:19:58.942 06:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:58.942 06:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:58.942 06:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:58.942 06:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:58.942 06:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.942 06:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.942 06:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.232 nvme0n1 00:19:59.232 06:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.232 06:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:59.232 06:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:59.232 06:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.232 06:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.232 06:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.501 06:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.501 06:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:59.501 06:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.501 06:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.501 06:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.501 06:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:59.501 06:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:19:59.501 06:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:59.501 06:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:59.501 06:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:59.501 06:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:59.501 06:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2ZiYmMwNzNmMTk3MzRjNjg2MDZjMjI2ZjFmN2NkMDZLb0jA: 00:19:59.501 06:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTBiMmVlYzY3N2Q5YzU2ZmJkNDhlYzc1NWRmMTVkNzANgd8Z: 00:19:59.501 06:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:59.501 06:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:59.501 06:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2ZiYmMwNzNmMTk3MzRjNjg2MDZjMjI2ZjFmN2NkMDZLb0jA: 00:19:59.501 06:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTBiMmVlYzY3N2Q5YzU2ZmJkNDhlYzc1NWRmMTVkNzANgd8Z: ]] 00:19:59.501 06:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTBiMmVlYzY3N2Q5YzU2ZmJkNDhlYzc1NWRmMTVkNzANgd8Z: 00:19:59.501 06:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:19:59.501 06:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:59.501 
06:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:59.501 06:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:59.501 06:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:59.501 06:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:59.501 06:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:59.501 06:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.501 06:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.501 06:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.501 06:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:59.501 06:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:59.501 06:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:59.501 06:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:59.501 06:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:59.501 06:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:59.501 06:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:59.501 06:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:59.501 06:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:59.501 06:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:59.501 06:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:59.501 06:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.501 06:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.501 06:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.787 nvme0n1 00:19:59.787 06:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.787 06:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:59.787 06:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:59.787 06:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.787 06:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.787 06:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.787 06:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.787 06:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:59.787 06:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.787 06:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.787 06:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.787 06:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:59.787 06:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 3 00:19:59.787 06:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:59.787 06:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:59.787 06:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:59.787 06:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:59.787 06:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzgxYzEwMjczMmFhNGM4YTk1OTA4ZmM2MGEyYWFlN2U2Nzg3OGU5NjA1YTU2NzkwAoSCHA==: 00:19:59.787 06:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjczNmZiMGU5ZTExZmQ2ZGM0NzAxNjRlYjJiMTRjNWaqik2x: 00:19:59.787 06:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:59.787 06:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:59.787 06:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzgxYzEwMjczMmFhNGM4YTk1OTA4ZmM2MGEyYWFlN2U2Nzg3OGU5NjA1YTU2NzkwAoSCHA==: 00:19:59.788 06:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjczNmZiMGU5ZTExZmQ2ZGM0NzAxNjRlYjJiMTRjNWaqik2x: ]] 00:19:59.788 06:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjczNmZiMGU5ZTExZmQ2ZGM0NzAxNjRlYjJiMTRjNWaqik2x: 00:19:59.788 06:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:19:59.788 06:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:59.788 06:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:59.788 06:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:59.788 06:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:59.788 06:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:59.788 06:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:59.788 06:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.788 06:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.788 06:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.788 06:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:59.788 06:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:59.788 06:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:59.788 06:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:59.788 06:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:59.788 06:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:59.788 06:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:59.788 06:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:59.788 06:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:59.788 06:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:59.788 06:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:59.788 06:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:59.788 06:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.788 06:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.354 nvme0n1 00:20:00.354 06:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.354 06:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:00.354 06:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:00.354 06:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.354 06:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.354 06:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.354 06:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.354 06:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:00.354 06:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.354 06:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.354 06:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.354 06:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:00.354 06:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:20:00.354 06:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:00.354 06:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:00.354 06:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:00.354 06:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:00.354 06:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzY2NjIyYzBjNTczMWYzMWEyY2M2ZWI4NjY3ZTFkZTVmYTQzMGIzYjFjYzdhNTAyZjAyMmE3ZjYzY2ZmZDdlZJXfI5A=: 00:20:00.354 06:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:00.354 06:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:00.354 06:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:00.354 06:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzY2NjIyYzBjNTczMWYzMWEyY2M2ZWI4NjY3ZTFkZTVmYTQzMGIzYjFjYzdhNTAyZjAyMmE3ZjYzY2ZmZDdlZJXfI5A=: 00:20:00.354 06:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:00.354 06:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:20:00.354 06:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:00.354 06:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:00.354 06:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:00.354 06:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:00.354 06:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:00.354 06:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:00.354 06:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.354 06:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.354 06:06:51 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.354 06:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:00.354 06:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:00.354 06:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:00.354 06:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:00.354 06:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:00.354 06:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:00.354 06:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:00.354 06:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:00.354 06:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:00.354 06:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:00.354 06:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:00.354 06:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:00.354 06:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.354 06:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.921 nvme0n1 00:20:00.921 06:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.921 06:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:00.921 06:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.921 06:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:00.921 06:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.921 06:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.921 06:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.921 06:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:00.921 06:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.921 06:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.921 06:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.921 06:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:00.921 06:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:00.921 06:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:20:00.921 06:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:00.921 06:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:00.921 06:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:00.921 06:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:00.921 06:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ0NjI1YzA4NGQyZGYyNjU4Y2NiOTMyMzAwZmExZDRDCbJ3: 00:20:00.921 06:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MTM5MzYxMTZjZWUzNjYzNGRhYWMxYTQwNTY0ZTc4NmJhNGYzNzU3ODI0YjcxOGRmYTkzN2M2NjUzZTIzODVkY07wd4w=: 00:20:00.921 06:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:00.921 06:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:00.921 06:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ0NjI1YzA4NGQyZGYyNjU4Y2NiOTMyMzAwZmExZDRDCbJ3: 00:20:00.921 06:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTM5MzYxMTZjZWUzNjYzNGRhYWMxYTQwNTY0ZTc4NmJhNGYzNzU3ODI0YjcxOGRmYTkzN2M2NjUzZTIzODVkY07wd4w=: ]] 00:20:00.921 06:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTM5MzYxMTZjZWUzNjYzNGRhYWMxYTQwNTY0ZTc4NmJhNGYzNzU3ODI0YjcxOGRmYTkzN2M2NjUzZTIzODVkY07wd4w=: 00:20:00.921 06:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:20:00.921 06:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:00.921 06:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:00.921 06:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:00.921 06:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:00.921 06:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:00.921 06:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:00.921 06:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.921 06:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.921 06:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.921 06:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:00.921 06:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:00.921 06:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:00.921 06:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:00.921 06:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:00.921 06:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:00.921 06:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:00.921 06:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:00.921 06:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:00.921 06:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:00.921 06:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:00.921 06:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.921 06:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.921 06:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.486 nvme0n1 00:20:01.486 06:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.486 06:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:01.486 06:06:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:01.486 06:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.486 06:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.486 06:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.743 06:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.743 06:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:01.743 06:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.743 06:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.743 06:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.743 06:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:01.743 06:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:20:01.743 06:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:01.743 06:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:01.743 06:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:01.743 06:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:01.743 06:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWIwOWVkNThmMWRlYTBmYjdlNTI5OGRkNjdiMTM0NzFmMDViZTBiMzJiYzBiMDkyahDVDg==: 00:20:01.743 06:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGM0OGRhNTNhYmQ3ZDhlOWNhODEyOGU1Y2I0Yzg0ZTQ3MWE1MTJhNGUzNTA4NTBllIKtkA==: 00:20:01.743 06:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:01.743 06:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:01.743 06:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWIwOWVkNThmMWRlYTBmYjdlNTI5OGRkNjdiMTM0NzFmMDViZTBiMzJiYzBiMDkyahDVDg==: 00:20:01.743 06:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGM0OGRhNTNhYmQ3ZDhlOWNhODEyOGU1Y2I0Yzg0ZTQ3MWE1MTJhNGUzNTA4NTBllIKtkA==: ]] 00:20:01.743 06:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGM0OGRhNTNhYmQ3ZDhlOWNhODEyOGU1Y2I0Yzg0ZTQ3MWE1MTJhNGUzNTA4NTBllIKtkA==: 00:20:01.743 06:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:20:01.743 06:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:01.743 06:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:01.743 06:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:01.743 06:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:01.743 06:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:01.743 06:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:01.743 06:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.743 06:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.743 06:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.743 06:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:01.743 06:06:53 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:20:01.743 06:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:01.743 06:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:01.743 06:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:01.743 06:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:01.743 06:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:01.743 06:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:01.743 06:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:01.743 06:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:01.743 06:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:01.743 06:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:01.743 06:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.743 06:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.309 nvme0n1 00:20:02.309 06:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.309 06:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:02.309 06:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:02.309 06:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.309 06:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.309 06:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.569 06:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.569 06:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:02.569 06:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.569 06:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.569 06:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.569 06:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:02.569 06:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:20:02.569 06:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:02.569 06:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:02.569 06:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:02.569 06:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:02.569 06:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2ZiYmMwNzNmMTk3MzRjNjg2MDZjMjI2ZjFmN2NkMDZLb0jA: 00:20:02.569 06:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTBiMmVlYzY3N2Q5YzU2ZmJkNDhlYzc1NWRmMTVkNzANgd8Z: 00:20:02.569 06:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:02.569 06:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:02.569 06:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:Y2ZiYmMwNzNmMTk3MzRjNjg2MDZjMjI2ZjFmN2NkMDZLb0jA: 00:20:02.569 06:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTBiMmVlYzY3N2Q5YzU2ZmJkNDhlYzc1NWRmMTVkNzANgd8Z: ]] 00:20:02.569 06:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTBiMmVlYzY3N2Q5YzU2ZmJkNDhlYzc1NWRmMTVkNzANgd8Z: 00:20:02.569 06:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:20:02.569 06:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:02.569 06:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:02.569 06:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:02.569 06:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:02.569 06:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:02.569 06:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:02.569 06:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.569 06:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.569 06:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.569 06:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:02.569 06:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:02.569 06:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:02.569 06:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:02.569 06:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:02.569 06:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:02.569 06:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:02.569 06:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:02.569 06:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:02.569 06:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:02.569 06:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:02.569 06:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.569 06:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.569 06:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.138 nvme0n1 00:20:03.138 06:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.138 06:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:03.138 06:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.138 06:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.138 06:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:03.138 06:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.398 06:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.398 
06:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:03.398 06:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.398 06:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.398 06:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.398 06:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:03.398 06:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:20:03.398 06:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:03.398 06:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:03.398 06:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:03.398 06:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:03.398 06:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzgxYzEwMjczMmFhNGM4YTk1OTA4ZmM2MGEyYWFlN2U2Nzg3OGU5NjA1YTU2NzkwAoSCHA==: 00:20:03.398 06:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjczNmZiMGU5ZTExZmQ2ZGM0NzAxNjRlYjJiMTRjNWaqik2x: 00:20:03.398 06:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:03.398 06:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:03.398 06:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzgxYzEwMjczMmFhNGM4YTk1OTA4ZmM2MGEyYWFlN2U2Nzg3OGU5NjA1YTU2NzkwAoSCHA==: 00:20:03.398 06:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjczNmZiMGU5ZTExZmQ2ZGM0NzAxNjRlYjJiMTRjNWaqik2x: ]] 00:20:03.398 06:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjczNmZiMGU5ZTExZmQ2ZGM0NzAxNjRlYjJiMTRjNWaqik2x: 00:20:03.398 06:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:20:03.398 06:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:03.398 06:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:03.398 06:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:03.398 06:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:03.398 06:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:03.398 06:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:03.398 06:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.398 06:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.398 06:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.398 06:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:03.398 06:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:03.398 06:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:03.398 06:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:03.398 06:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:03.398 06:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:03.398 06:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
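For reference, each connect_authenticate pass traced here reduces to the same four host-side RPC steps; a minimal sketch for the sha256/ffdhe8192/keyid=3 iteration in progress above, assuming rpc_cmd is the autotest harness wrapper around SPDK's RPC client and that the initiator address resolves to 10.0.0.1 as it does in this run:

  # Limit host-side DH-HMAC-CHAP negotiation to the digest/dhgroup pair under test.
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
  # Attach to the target subsystem with the key pair selected by this keyid.
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key3 --dhchap-ctrlr-key ckey3
  # Confirm the authenticated controller came up, then detach before the next keyid.
  rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expected output: nvme0
  rpc_cmd bdev_nvme_detach_controller nvme0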
00:20:03.398 06:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:03.398 06:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:03.398 06:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:03.398 06:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:03.398 06:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:03.398 06:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.398 06:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.967 nvme0n1 00:20:03.967 06:06:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.967 06:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:03.967 06:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:03.967 06:06:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.967 06:06:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.967 06:06:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.227 06:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.227 06:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:04.227 06:06:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.227 06:06:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.227 06:06:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.227 06:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:04.227 06:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:20:04.227 06:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:04.227 06:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:04.227 06:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:04.227 06:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:04.227 06:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzY2NjIyYzBjNTczMWYzMWEyY2M2ZWI4NjY3ZTFkZTVmYTQzMGIzYjFjYzdhNTAyZjAyMmE3ZjYzY2ZmZDdlZJXfI5A=: 00:20:04.227 06:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:04.227 06:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:04.227 06:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:04.227 06:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzY2NjIyYzBjNTczMWYzMWEyY2M2ZWI4NjY3ZTFkZTVmYTQzMGIzYjFjYzdhNTAyZjAyMmE3ZjYzY2ZmZDdlZJXfI5A=: 00:20:04.227 06:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:04.227 06:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:20:04.227 06:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:04.227 06:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:04.227 06:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:04.227 
06:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:04.227 06:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:04.227 06:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:04.227 06:06:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.227 06:06:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.227 06:06:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.227 06:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:04.227 06:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:04.227 06:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:04.227 06:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:04.227 06:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:04.227 06:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:04.227 06:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:04.227 06:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:04.227 06:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:04.227 06:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:04.227 06:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:04.227 06:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:04.227 06:06:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.227 06:06:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.796 nvme0n1 00:20:04.796 06:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.796 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:04.796 06:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.796 06:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.796 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:04.796 06:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.796 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.796 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:04.796 06:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.796 06:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ0NjI1YzA4NGQyZGYyNjU4Y2NiOTMyMzAwZmExZDRDCbJ3: 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTM5MzYxMTZjZWUzNjYzNGRhYWMxYTQwNTY0ZTc4NmJhNGYzNzU3ODI0YjcxOGRmYTkzN2M2NjUzZTIzODVkY07wd4w=: 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ0NjI1YzA4NGQyZGYyNjU4Y2NiOTMyMzAwZmExZDRDCbJ3: 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTM5MzYxMTZjZWUzNjYzNGRhYWMxYTQwNTY0ZTc4NmJhNGYzNzU3ODI0YjcxOGRmYTkzN2M2NjUzZTIzODVkY07wd4w=: ]] 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTM5MzYxMTZjZWUzNjYzNGRhYWMxYTQwNTY0ZTc4NmJhNGYzNzU3ODI0YjcxOGRmYTkzN2M2NjUzZTIzODVkY07wd4w=: 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.057 nvme0n1 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWIwOWVkNThmMWRlYTBmYjdlNTI5OGRkNjdiMTM0NzFmMDViZTBiMzJiYzBiMDkyahDVDg==: 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGM0OGRhNTNhYmQ3ZDhlOWNhODEyOGU1Y2I0Yzg0ZTQ3MWE1MTJhNGUzNTA4NTBllIKtkA==: 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWIwOWVkNThmMWRlYTBmYjdlNTI5OGRkNjdiMTM0NzFmMDViZTBiMzJiYzBiMDkyahDVDg==: 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGM0OGRhNTNhYmQ3ZDhlOWNhODEyOGU1Y2I0Yzg0ZTQ3MWE1MTJhNGUzNTA4NTBllIKtkA==: ]] 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGM0OGRhNTNhYmQ3ZDhlOWNhODEyOGU1Y2I0Yzg0ZTQ3MWE1MTJhNGUzNTA4NTBllIKtkA==: 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
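The ckey=() expansion that closes the entry above is what decides whether mutual authentication is requested: the --dhchap-ctrlr-key argument is only spliced in when a controller key exists for that slot, which is why the key4 attaches in this log carry no ckey and exercise one-way authentication only. The idiom in isolation (keyid and ckeys are assumed to be the script's loop variable and controller-key table):

  # ckeys[keyid] empty     -> ckey=()                               -> host-to-target auth only
  # ckeys[keyid] non-empty -> ckey=(--dhchap-ctrlr-key "ckeyN")     -> bidirectional (mutual) auth
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})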
00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.057 06:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.317 nvme0n1 00:20:05.317 06:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.317 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:05.317 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:05.317 06:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.317 06:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.317 06:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.317 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.317 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:05.317 06:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.317 06:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.317 06:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.317 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:05.317 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:20:05.317 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:05.317 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:05.317 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:05.317 06:06:56 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:20:05.317 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2ZiYmMwNzNmMTk3MzRjNjg2MDZjMjI2ZjFmN2NkMDZLb0jA: 00:20:05.317 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTBiMmVlYzY3N2Q5YzU2ZmJkNDhlYzc1NWRmMTVkNzANgd8Z: 00:20:05.317 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:05.317 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:05.317 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2ZiYmMwNzNmMTk3MzRjNjg2MDZjMjI2ZjFmN2NkMDZLb0jA: 00:20:05.317 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTBiMmVlYzY3N2Q5YzU2ZmJkNDhlYzc1NWRmMTVkNzANgd8Z: ]] 00:20:05.317 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTBiMmVlYzY3N2Q5YzU2ZmJkNDhlYzc1NWRmMTVkNzANgd8Z: 00:20:05.317 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:20:05.317 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:05.317 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:05.317 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:05.317 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:05.317 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:05.317 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:05.317 06:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.317 06:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.317 06:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.317 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:05.317 06:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:05.317 06:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:05.317 06:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:05.317 06:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:05.317 06:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:05.317 06:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:05.317 06:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:05.317 06:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:05.317 06:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:05.317 06:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:05.317 06:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.317 06:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.317 06:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.317 nvme0n1 00:20:05.317 06:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.317 06:06:57 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:05.317 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:05.317 06:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.317 06:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.577 06:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.577 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.577 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:05.577 06:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.577 06:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.577 06:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.577 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:05.577 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:20:05.577 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:05.577 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:05.577 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:05.577 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:05.577 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzgxYzEwMjczMmFhNGM4YTk1OTA4ZmM2MGEyYWFlN2U2Nzg3OGU5NjA1YTU2NzkwAoSCHA==: 00:20:05.577 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjczNmZiMGU5ZTExZmQ2ZGM0NzAxNjRlYjJiMTRjNWaqik2x: 00:20:05.577 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:05.577 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:05.577 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzgxYzEwMjczMmFhNGM4YTk1OTA4ZmM2MGEyYWFlN2U2Nzg3OGU5NjA1YTU2NzkwAoSCHA==: 00:20:05.577 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjczNmZiMGU5ZTExZmQ2ZGM0NzAxNjRlYjJiMTRjNWaqik2x: ]] 00:20:05.577 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjczNmZiMGU5ZTExZmQ2ZGM0NzAxNjRlYjJiMTRjNWaqik2x: 00:20:05.577 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:20:05.577 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:05.577 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:05.577 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:05.577 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:05.577 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:05.577 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:05.577 06:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.577 06:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.577 06:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.577 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:05.577 06:06:57 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:20:05.577 06:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:05.577 06:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:05.577 06:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:05.577 06:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:05.577 06:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:05.577 06:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:05.577 06:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:05.577 06:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:05.577 06:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:05.577 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:05.577 06:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.577 06:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.577 nvme0n1 00:20:05.577 06:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.577 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:05.577 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:05.577 06:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.577 06:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.577 06:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.577 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.577 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:05.577 06:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.577 06:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.577 06:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.577 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:05.577 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:20:05.577 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:05.577 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:05.577 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:05.577 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:05.577 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzY2NjIyYzBjNTczMWYzMWEyY2M2ZWI4NjY3ZTFkZTVmYTQzMGIzYjFjYzdhNTAyZjAyMmE3ZjYzY2ZmZDdlZJXfI5A=: 00:20:05.577 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:05.577 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:05.577 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:05.577 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YzY2NjIyYzBjNTczMWYzMWEyY2M2ZWI4NjY3ZTFkZTVmYTQzMGIzYjFjYzdhNTAyZjAyMmE3ZjYzY2ZmZDdlZJXfI5A=: 00:20:05.578 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:05.578 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:20:05.578 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:05.578 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:05.578 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:05.578 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:05.578 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:05.578 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:05.578 06:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.578 06:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.837 06:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.837 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:05.837 06:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:05.837 06:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:05.837 06:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:05.837 06:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:05.837 06:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:05.837 06:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:05.837 06:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:05.837 06:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:05.837 06:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:05.837 06:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:05.837 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:05.837 06:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.837 06:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.837 nvme0n1 00:20:05.837 06:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.837 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:05.837 06:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.837 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:05.837 06:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.837 06:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.837 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.837 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:05.837 06:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:20:05.837 06:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.837 06:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.837 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:05.837 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:05.837 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:20:05.837 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:05.837 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:05.837 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:05.837 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:05.837 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ0NjI1YzA4NGQyZGYyNjU4Y2NiOTMyMzAwZmExZDRDCbJ3: 00:20:05.837 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTM5MzYxMTZjZWUzNjYzNGRhYWMxYTQwNTY0ZTc4NmJhNGYzNzU3ODI0YjcxOGRmYTkzN2M2NjUzZTIzODVkY07wd4w=: 00:20:05.837 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:05.837 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:05.837 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ0NjI1YzA4NGQyZGYyNjU4Y2NiOTMyMzAwZmExZDRDCbJ3: 00:20:05.837 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTM5MzYxMTZjZWUzNjYzNGRhYWMxYTQwNTY0ZTc4NmJhNGYzNzU3ODI0YjcxOGRmYTkzN2M2NjUzZTIzODVkY07wd4w=: ]] 00:20:05.837 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTM5MzYxMTZjZWUzNjYzNGRhYWMxYTQwNTY0ZTc4NmJhNGYzNzU3ODI0YjcxOGRmYTkzN2M2NjUzZTIzODVkY07wd4w=: 00:20:05.838 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:20:05.838 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:05.838 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:05.838 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:05.838 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:05.838 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:05.838 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:05.838 06:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.838 06:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.838 06:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.838 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:05.838 06:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:05.838 06:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:05.838 06:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:05.838 06:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:05.838 06:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:05.838 06:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:20:05.838 06:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:05.838 06:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:05.838 06:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:05.838 06:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:05.838 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.838 06:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.838 06:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.097 nvme0n1 00:20:06.097 06:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.097 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:06.097 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:06.097 06:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.097 06:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.097 06:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.097 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.097 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:06.097 06:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.097 06:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.097 06:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.097 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:06.097 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:20:06.097 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:06.097 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:06.097 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:06.097 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:06.097 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWIwOWVkNThmMWRlYTBmYjdlNTI5OGRkNjdiMTM0NzFmMDViZTBiMzJiYzBiMDkyahDVDg==: 00:20:06.097 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGM0OGRhNTNhYmQ3ZDhlOWNhODEyOGU1Y2I0Yzg0ZTQ3MWE1MTJhNGUzNTA4NTBllIKtkA==: 00:20:06.097 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:06.097 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:06.097 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWIwOWVkNThmMWRlYTBmYjdlNTI5OGRkNjdiMTM0NzFmMDViZTBiMzJiYzBiMDkyahDVDg==: 00:20:06.097 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGM0OGRhNTNhYmQ3ZDhlOWNhODEyOGU1Y2I0Yzg0ZTQ3MWE1MTJhNGUzNTA4NTBllIKtkA==: ]] 00:20:06.097 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGM0OGRhNTNhYmQ3ZDhlOWNhODEyOGU1Y2I0Yzg0ZTQ3MWE1MTJhNGUzNTA4NTBllIKtkA==: 00:20:06.097 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
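Each digest/dhgroup pass that appears in the trace (sha256 with ffdhe8192 above, then sha384 with ffdhe2048 and now ffdhe3072) is driven by the same nested loop visible at host/auth.sh lines 100-103. Distilled, and assuming digests, dhgroups and keys are the arrays the script populates earlier:

  # Sweep every digest/dhgroup combination across every configured key slot.
  for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # stage digest, dhgroup and keys on the target side
        connect_authenticate "$digest" "$dhgroup" "$keyid"   # attach, verify and detach on the host side
      done
    done
  done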
00:20:06.097 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:06.097 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:06.097 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:06.097 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:06.097 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:06.097 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:06.097 06:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.097 06:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.097 06:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.097 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:06.097 06:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:06.097 06:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:06.097 06:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:06.097 06:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:06.097 06:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:06.097 06:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:06.097 06:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:06.097 06:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:06.097 06:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:06.097 06:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:06.097 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.097 06:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.097 06:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.357 nvme0n1 00:20:06.357 06:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.357 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:06.357 06:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.357 06:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.357 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:06.357 06:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.357 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.357 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:06.357 06:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.357 06:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.357 06:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.357 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:20:06.357 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:20:06.357 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:06.357 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:06.357 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:06.357 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:06.357 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2ZiYmMwNzNmMTk3MzRjNjg2MDZjMjI2ZjFmN2NkMDZLb0jA: 00:20:06.357 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTBiMmVlYzY3N2Q5YzU2ZmJkNDhlYzc1NWRmMTVkNzANgd8Z: 00:20:06.357 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:06.357 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:06.357 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2ZiYmMwNzNmMTk3MzRjNjg2MDZjMjI2ZjFmN2NkMDZLb0jA: 00:20:06.357 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTBiMmVlYzY3N2Q5YzU2ZmJkNDhlYzc1NWRmMTVkNzANgd8Z: ]] 00:20:06.357 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTBiMmVlYzY3N2Q5YzU2ZmJkNDhlYzc1NWRmMTVkNzANgd8Z: 00:20:06.357 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:20:06.357 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:06.357 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:06.357 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:06.357 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:06.357 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:06.357 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:06.357 06:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.357 06:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.357 06:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.357 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:06.357 06:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:06.357 06:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:06.357 06:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:06.357 06:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:06.357 06:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:06.357 06:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:06.357 06:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:06.357 06:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:06.357 06:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:06.357 06:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:06.357 06:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:06.357 06:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.357 06:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.357 nvme0n1 00:20:06.357 06:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.357 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:06.357 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:06.357 06:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.357 06:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.617 06:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.617 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.617 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:06.617 06:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.617 06:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.617 06:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.617 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:06.617 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:20:06.617 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:06.617 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:06.617 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:06.617 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:06.617 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzgxYzEwMjczMmFhNGM4YTk1OTA4ZmM2MGEyYWFlN2U2Nzg3OGU5NjA1YTU2NzkwAoSCHA==: 00:20:06.617 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjczNmZiMGU5ZTExZmQ2ZGM0NzAxNjRlYjJiMTRjNWaqik2x: 00:20:06.617 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:06.617 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:06.617 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzgxYzEwMjczMmFhNGM4YTk1OTA4ZmM2MGEyYWFlN2U2Nzg3OGU5NjA1YTU2NzkwAoSCHA==: 00:20:06.617 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjczNmZiMGU5ZTExZmQ2ZGM0NzAxNjRlYjJiMTRjNWaqik2x: ]] 00:20:06.617 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjczNmZiMGU5ZTExZmQ2ZGM0NzAxNjRlYjJiMTRjNWaqik2x: 00:20:06.617 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:20:06.617 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:06.617 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:06.617 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:06.617 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:06.617 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:06.617 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:06.617 06:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.617 06:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.617 06:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.617 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:06.617 06:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:06.617 06:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:06.617 06:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:06.617 06:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:06.617 06:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:06.617 06:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:06.617 06:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:06.617 06:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:06.617 06:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:06.617 06:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:06.617 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:06.617 06:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.617 06:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.617 nvme0n1 00:20:06.617 06:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.617 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:06.617 06:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.617 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:06.617 06:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.617 06:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YzY2NjIyYzBjNTczMWYzMWEyY2M2ZWI4NjY3ZTFkZTVmYTQzMGIzYjFjYzdhNTAyZjAyMmE3ZjYzY2ZmZDdlZJXfI5A=: 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzY2NjIyYzBjNTczMWYzMWEyY2M2ZWI4NjY3ZTFkZTVmYTQzMGIzYjFjYzdhNTAyZjAyMmE3ZjYzY2ZmZDdlZJXfI5A=: 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.877 nvme0n1 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:06.877 06:06:58 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ0NjI1YzA4NGQyZGYyNjU4Y2NiOTMyMzAwZmExZDRDCbJ3: 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTM5MzYxMTZjZWUzNjYzNGRhYWMxYTQwNTY0ZTc4NmJhNGYzNzU3ODI0YjcxOGRmYTkzN2M2NjUzZTIzODVkY07wd4w=: 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ0NjI1YzA4NGQyZGYyNjU4Y2NiOTMyMzAwZmExZDRDCbJ3: 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTM5MzYxMTZjZWUzNjYzNGRhYWMxYTQwNTY0ZTc4NmJhNGYzNzU3ODI0YjcxOGRmYTkzN2M2NjUzZTIzODVkY07wd4w=: ]] 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTM5MzYxMTZjZWUzNjYzNGRhYWMxYTQwNTY0ZTc4NmJhNGYzNzU3ODI0YjcxOGRmYTkzN2M2NjUzZTIzODVkY07wd4w=: 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:06.877 06:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:06.878 06:06:58 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:20:06.878 06:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:06.878 06:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:06.878 06:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:06.878 06:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:06.878 06:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:06.878 06:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:06.878 06:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:06.878 06:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:06.878 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:06.878 06:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.878 06:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.137 nvme0n1 00:20:07.137 06:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.137 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:07.137 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:07.137 06:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.137 06:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.137 06:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.396 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.396 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:07.396 06:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.396 06:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.396 06:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.396 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:07.396 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:20:07.396 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:07.396 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:07.396 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:07.396 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:07.396 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWIwOWVkNThmMWRlYTBmYjdlNTI5OGRkNjdiMTM0NzFmMDViZTBiMzJiYzBiMDkyahDVDg==: 00:20:07.396 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGM0OGRhNTNhYmQ3ZDhlOWNhODEyOGU1Y2I0Yzg0ZTQ3MWE1MTJhNGUzNTA4NTBllIKtkA==: 00:20:07.396 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:07.396 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:07.396 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YWIwOWVkNThmMWRlYTBmYjdlNTI5OGRkNjdiMTM0NzFmMDViZTBiMzJiYzBiMDkyahDVDg==: 00:20:07.396 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGM0OGRhNTNhYmQ3ZDhlOWNhODEyOGU1Y2I0Yzg0ZTQ3MWE1MTJhNGUzNTA4NTBllIKtkA==: ]] 00:20:07.396 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGM0OGRhNTNhYmQ3ZDhlOWNhODEyOGU1Y2I0Yzg0ZTQ3MWE1MTJhNGUzNTA4NTBllIKtkA==: 00:20:07.396 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:20:07.396 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:07.396 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:07.396 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:07.396 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:07.396 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:07.396 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:07.396 06:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.396 06:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.396 06:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.396 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:07.396 06:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:07.396 06:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:07.396 06:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:07.396 06:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:07.396 06:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:07.396 06:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:07.396 06:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:07.396 06:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:07.396 06:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:07.396 06:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:07.396 06:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.396 06:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.396 06:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.396 nvme0n1 00:20:07.396 06:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.396 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:07.396 06:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.396 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:07.396 06:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.655 06:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.655 06:06:59 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.655 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:07.655 06:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.655 06:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.655 06:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.655 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:07.655 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:20:07.655 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:07.655 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:07.655 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:07.655 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:07.655 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2ZiYmMwNzNmMTk3MzRjNjg2MDZjMjI2ZjFmN2NkMDZLb0jA: 00:20:07.655 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTBiMmVlYzY3N2Q5YzU2ZmJkNDhlYzc1NWRmMTVkNzANgd8Z: 00:20:07.655 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:07.655 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:07.655 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2ZiYmMwNzNmMTk3MzRjNjg2MDZjMjI2ZjFmN2NkMDZLb0jA: 00:20:07.655 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTBiMmVlYzY3N2Q5YzU2ZmJkNDhlYzc1NWRmMTVkNzANgd8Z: ]] 00:20:07.655 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTBiMmVlYzY3N2Q5YzU2ZmJkNDhlYzc1NWRmMTVkNzANgd8Z: 00:20:07.655 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:20:07.655 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:07.655 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:07.655 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:07.655 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:07.655 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:07.655 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:07.655 06:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.655 06:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.655 06:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.655 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:07.655 06:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:07.655 06:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:07.655 06:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:07.655 06:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:07.655 06:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:07.655 06:06:59 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:07.655 06:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:07.655 06:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:07.655 06:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:07.655 06:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:07.655 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:07.655 06:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.655 06:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.914 nvme0n1 00:20:07.914 06:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.915 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:07.915 06:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.915 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:07.915 06:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.915 06:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.915 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.915 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:07.915 06:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.915 06:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.915 06:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.915 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:07.915 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:20:07.915 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:07.915 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:07.915 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:07.915 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:07.915 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzgxYzEwMjczMmFhNGM4YTk1OTA4ZmM2MGEyYWFlN2U2Nzg3OGU5NjA1YTU2NzkwAoSCHA==: 00:20:07.915 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjczNmZiMGU5ZTExZmQ2ZGM0NzAxNjRlYjJiMTRjNWaqik2x: 00:20:07.915 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:07.915 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:07.915 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzgxYzEwMjczMmFhNGM4YTk1OTA4ZmM2MGEyYWFlN2U2Nzg3OGU5NjA1YTU2NzkwAoSCHA==: 00:20:07.915 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjczNmZiMGU5ZTExZmQ2ZGM0NzAxNjRlYjJiMTRjNWaqik2x: ]] 00:20:07.915 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjczNmZiMGU5ZTExZmQ2ZGM0NzAxNjRlYjJiMTRjNWaqik2x: 00:20:07.915 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:20:07.915 06:06:59 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:07.915 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:07.915 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:07.915 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:07.915 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:07.915 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:07.915 06:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.915 06:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.915 06:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.915 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:07.915 06:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:07.915 06:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:07.915 06:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:07.915 06:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:07.915 06:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:07.915 06:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:07.915 06:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:07.915 06:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:07.915 06:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:07.915 06:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:07.915 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:07.915 06:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.915 06:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.174 nvme0n1 00:20:08.174 06:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.174 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:08.174 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:08.174 06:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.174 06:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.174 06:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.174 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.174 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:08.174 06:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.174 06:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.174 06:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.174 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:20:08.174 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:20:08.174 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:08.174 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:08.174 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:08.174 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:08.174 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzY2NjIyYzBjNTczMWYzMWEyY2M2ZWI4NjY3ZTFkZTVmYTQzMGIzYjFjYzdhNTAyZjAyMmE3ZjYzY2ZmZDdlZJXfI5A=: 00:20:08.174 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:08.174 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:08.174 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:08.174 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzY2NjIyYzBjNTczMWYzMWEyY2M2ZWI4NjY3ZTFkZTVmYTQzMGIzYjFjYzdhNTAyZjAyMmE3ZjYzY2ZmZDdlZJXfI5A=: 00:20:08.174 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:08.174 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:20:08.174 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:08.174 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:08.175 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:08.175 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:08.175 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:08.175 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:08.175 06:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.175 06:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.175 06:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.175 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:08.175 06:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:08.175 06:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:08.175 06:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:08.175 06:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:08.175 06:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:08.175 06:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:08.175 06:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:08.175 06:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:08.175 06:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:08.175 06:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:08.175 06:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:08.175 06:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:20:08.175 06:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.434 nvme0n1 00:20:08.434 06:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.434 06:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:08.434 06:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:08.434 06:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.434 06:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.434 06:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.434 06:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.434 06:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:08.434 06:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.434 06:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.434 06:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.434 06:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:08.434 06:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:08.434 06:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:20:08.434 06:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:08.434 06:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:08.434 06:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:08.434 06:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:08.434 06:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ0NjI1YzA4NGQyZGYyNjU4Y2NiOTMyMzAwZmExZDRDCbJ3: 00:20:08.434 06:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTM5MzYxMTZjZWUzNjYzNGRhYWMxYTQwNTY0ZTc4NmJhNGYzNzU3ODI0YjcxOGRmYTkzN2M2NjUzZTIzODVkY07wd4w=: 00:20:08.434 06:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:08.434 06:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:08.434 06:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ0NjI1YzA4NGQyZGYyNjU4Y2NiOTMyMzAwZmExZDRDCbJ3: 00:20:08.434 06:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTM5MzYxMTZjZWUzNjYzNGRhYWMxYTQwNTY0ZTc4NmJhNGYzNzU3ODI0YjcxOGRmYTkzN2M2NjUzZTIzODVkY07wd4w=: ]] 00:20:08.434 06:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTM5MzYxMTZjZWUzNjYzNGRhYWMxYTQwNTY0ZTc4NmJhNGYzNzU3ODI0YjcxOGRmYTkzN2M2NjUzZTIzODVkY07wd4w=: 00:20:08.434 06:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:20:08.434 06:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:08.434 06:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:08.434 06:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:08.434 06:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:08.434 06:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:08.434 06:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:20:08.434 06:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.434 06:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.434 06:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.434 06:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:08.434 06:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:08.434 06:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:08.434 06:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:08.434 06:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:08.434 06:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:08.434 06:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:08.434 06:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:08.434 06:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:08.434 06:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:08.434 06:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:08.434 06:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.434 06:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.434 06:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.002 nvme0n1 00:20:09.002 06:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.002 06:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:09.002 06:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:09.002 06:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.002 06:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.002 06:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.002 06:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.002 06:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:09.002 06:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.002 06:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.002 06:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.002 06:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:09.002 06:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:20:09.002 06:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:09.002 06:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:09.002 06:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:09.002 06:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:09.002 06:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YWIwOWVkNThmMWRlYTBmYjdlNTI5OGRkNjdiMTM0NzFmMDViZTBiMzJiYzBiMDkyahDVDg==: 00:20:09.002 06:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGM0OGRhNTNhYmQ3ZDhlOWNhODEyOGU1Y2I0Yzg0ZTQ3MWE1MTJhNGUzNTA4NTBllIKtkA==: 00:20:09.002 06:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:09.002 06:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:09.002 06:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWIwOWVkNThmMWRlYTBmYjdlNTI5OGRkNjdiMTM0NzFmMDViZTBiMzJiYzBiMDkyahDVDg==: 00:20:09.002 06:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGM0OGRhNTNhYmQ3ZDhlOWNhODEyOGU1Y2I0Yzg0ZTQ3MWE1MTJhNGUzNTA4NTBllIKtkA==: ]] 00:20:09.003 06:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGM0OGRhNTNhYmQ3ZDhlOWNhODEyOGU1Y2I0Yzg0ZTQ3MWE1MTJhNGUzNTA4NTBllIKtkA==: 00:20:09.003 06:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:20:09.003 06:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:09.003 06:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:09.003 06:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:09.003 06:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:09.003 06:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:09.003 06:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:09.003 06:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.003 06:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.003 06:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.003 06:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:09.003 06:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:09.003 06:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:09.003 06:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:09.003 06:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:09.003 06:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:09.003 06:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:09.003 06:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:09.003 06:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:09.003 06:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:09.003 06:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:09.003 06:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:09.003 06:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.003 06:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.571 nvme0n1 00:20:09.571 06:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.571 06:07:01 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:09.571 06:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:09.571 06:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.571 06:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.571 06:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.571 06:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.571 06:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:09.571 06:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.571 06:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.571 06:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.571 06:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:09.571 06:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:20:09.571 06:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:09.571 06:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:09.571 06:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:09.571 06:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:09.571 06:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2ZiYmMwNzNmMTk3MzRjNjg2MDZjMjI2ZjFmN2NkMDZLb0jA: 00:20:09.571 06:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTBiMmVlYzY3N2Q5YzU2ZmJkNDhlYzc1NWRmMTVkNzANgd8Z: 00:20:09.571 06:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:09.571 06:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:09.571 06:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2ZiYmMwNzNmMTk3MzRjNjg2MDZjMjI2ZjFmN2NkMDZLb0jA: 00:20:09.571 06:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTBiMmVlYzY3N2Q5YzU2ZmJkNDhlYzc1NWRmMTVkNzANgd8Z: ]] 00:20:09.571 06:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTBiMmVlYzY3N2Q5YzU2ZmJkNDhlYzc1NWRmMTVkNzANgd8Z: 00:20:09.571 06:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:20:09.571 06:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:09.571 06:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:09.571 06:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:09.571 06:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:09.571 06:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:09.571 06:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:09.571 06:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.571 06:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.571 06:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.571 06:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:09.571 06:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:20:09.571 06:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:09.571 06:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:09.571 06:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:09.571 06:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:09.571 06:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:09.571 06:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:09.571 06:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:09.571 06:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:09.571 06:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:09.571 06:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.571 06:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.571 06:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.830 nvme0n1 00:20:09.830 06:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.830 06:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:09.830 06:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.830 06:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:09.830 06:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.089 06:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.089 06:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.090 06:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:10.090 06:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.090 06:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.090 06:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.090 06:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:10.090 06:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:20:10.090 06:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:10.090 06:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:10.090 06:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:10.090 06:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:10.090 06:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzgxYzEwMjczMmFhNGM4YTk1OTA4ZmM2MGEyYWFlN2U2Nzg3OGU5NjA1YTU2NzkwAoSCHA==: 00:20:10.090 06:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjczNmZiMGU5ZTExZmQ2ZGM0NzAxNjRlYjJiMTRjNWaqik2x: 00:20:10.090 06:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:10.090 06:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:10.090 06:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:YzgxYzEwMjczMmFhNGM4YTk1OTA4ZmM2MGEyYWFlN2U2Nzg3OGU5NjA1YTU2NzkwAoSCHA==: 00:20:10.090 06:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjczNmZiMGU5ZTExZmQ2ZGM0NzAxNjRlYjJiMTRjNWaqik2x: ]] 00:20:10.090 06:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjczNmZiMGU5ZTExZmQ2ZGM0NzAxNjRlYjJiMTRjNWaqik2x: 00:20:10.090 06:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:20:10.090 06:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:10.090 06:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:10.090 06:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:10.090 06:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:10.090 06:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:10.090 06:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:10.090 06:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.090 06:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.090 06:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.090 06:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:10.090 06:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:10.090 06:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:10.090 06:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:10.090 06:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:10.090 06:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:10.090 06:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:10.090 06:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:10.090 06:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:10.090 06:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:10.090 06:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:10.090 06:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:10.090 06:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.090 06:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.349 nvme0n1 00:20:10.349 06:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.349 06:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:10.349 06:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.349 06:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.349 06:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:10.349 06:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.608 06:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:20:10.608 06:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:10.608 06:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.608 06:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.608 06:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.608 06:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:10.608 06:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:20:10.608 06:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:10.608 06:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:10.608 06:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:10.608 06:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:10.608 06:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzY2NjIyYzBjNTczMWYzMWEyY2M2ZWI4NjY3ZTFkZTVmYTQzMGIzYjFjYzdhNTAyZjAyMmE3ZjYzY2ZmZDdlZJXfI5A=: 00:20:10.608 06:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:10.608 06:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:10.608 06:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:10.608 06:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzY2NjIyYzBjNTczMWYzMWEyY2M2ZWI4NjY3ZTFkZTVmYTQzMGIzYjFjYzdhNTAyZjAyMmE3ZjYzY2ZmZDdlZJXfI5A=: 00:20:10.608 06:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:10.608 06:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:20:10.608 06:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:10.608 06:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:10.608 06:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:10.608 06:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:10.608 06:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:10.608 06:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:10.608 06:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.608 06:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.608 06:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.608 06:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:10.608 06:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:10.608 06:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:10.608 06:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:10.608 06:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:10.608 06:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:10.608 06:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:10.608 06:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:10.608 06:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
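
For readers skimming the xtrace, each connect_authenticate round above reduces to a short RPC sequence. The sketch below is reconstructed only from the commands visible in this log (sha384 / ffdhe6144 / keyid 4 at this point); it is a minimal sketch, not the script itself. rpc_cmd is the autotest harness's RPC wrapper (assumed here to forward to SPDK's scripts/rpc.py), and the address, port, and NQNs are the values this particular run resolved, not defaults.

    # One connect_authenticate round as it appears in the xtrace (hedged sketch).
    digest=sha384 dhgroup=ffdhe6144 keyid=4

    # Limit the host to the digest/dhgroup under test.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Connect to the target over TCP, presenting the DH-HMAC-CHAP key
    # (plus --dhchap-ctrlr-key "ckey${keyid}" when a controller key exists; keyid 4 has none in this run).
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}"

    # The attach only succeeds if authentication passed, so the check is simply
    # that the named controller exists; it is then detached for the next round.
    [[ "$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
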
00:20:10.608 06:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:10.608 06:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:10.608 06:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:10.608 06:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.608 06:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.867 nvme0n1 00:20:10.867 06:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.867 06:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:10.867 06:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:10.867 06:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.867 06:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.867 06:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.126 06:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.126 06:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:11.126 06:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.126 06:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.126 06:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.126 06:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:11.126 06:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:11.126 06:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:20:11.126 06:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:11.126 06:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:11.126 06:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:11.126 06:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:11.126 06:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ0NjI1YzA4NGQyZGYyNjU4Y2NiOTMyMzAwZmExZDRDCbJ3: 00:20:11.126 06:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTM5MzYxMTZjZWUzNjYzNGRhYWMxYTQwNTY0ZTc4NmJhNGYzNzU3ODI0YjcxOGRmYTkzN2M2NjUzZTIzODVkY07wd4w=: 00:20:11.126 06:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:11.126 06:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:11.126 06:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ0NjI1YzA4NGQyZGYyNjU4Y2NiOTMyMzAwZmExZDRDCbJ3: 00:20:11.126 06:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTM5MzYxMTZjZWUzNjYzNGRhYWMxYTQwNTY0ZTc4NmJhNGYzNzU3ODI0YjcxOGRmYTkzN2M2NjUzZTIzODVkY07wd4w=: ]] 00:20:11.126 06:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTM5MzYxMTZjZWUzNjYzNGRhYWMxYTQwNTY0ZTc4NmJhNGYzNzU3ODI0YjcxOGRmYTkzN2M2NjUzZTIzODVkY07wd4w=: 00:20:11.126 06:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:20:11.126 06:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
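
Zooming out, the pattern repeating through this stretch of the log is the two-level loop at host/auth.sh@101-104: for each DH group, every key index is first pushed to the target side via nvmet_auth_set_key and then exercised from the SPDK host via connect_authenticate (sha384 is the digest for the whole excerpt shown here). The sketch below is a rough reconstruction under those assumptions; the arrays are illustrative placeholders, the real DHHC-1 secrets are the ones printed verbatim above, and ckey is optional (keyid 4 has none in this run).

    # Rough sketch of the loop driving this part of the log (hedged, not the script's real init).
    keys=(  [0]="DHHC-1:00:..." [1]="DHHC-1:00:..." [2]="DHHC-1:01:..." [3]="DHHC-1:02:..." [4]="DHHC-1:03:..." )
    ckeys=( [0]="DHHC-1:03:..." [1]="DHHC-1:02:..." [2]="DHHC-1:01:..." [3]="DHHC-1:00:..." [4]="" )

    for dhgroup in "${dhgroups[@]}"; do      # ffdhe3072, ffdhe4096, ffdhe6144, ffdhe8192 in this excerpt
        for keyid in "${!keys[@]}"; do
            # Program the host/controller secrets on the target side.
            nvmet_auth_set_key "sha384" "$dhgroup" "$keyid"
            # Reconnect from the SPDK host and expect DH-HMAC-CHAP to succeed.
            connect_authenticate "sha384" "$dhgroup" "$keyid"
        done
    done
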
00:20:11.126 06:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:11.126 06:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:11.126 06:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:11.126 06:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:11.126 06:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:11.126 06:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.126 06:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.126 06:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.126 06:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:11.126 06:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:11.126 06:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:11.126 06:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:11.126 06:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:11.126 06:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:11.126 06:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:11.126 06:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:11.126 06:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:11.126 06:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:11.126 06:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:11.126 06:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:11.126 06:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.126 06:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.692 nvme0n1 00:20:11.692 06:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.692 06:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:11.692 06:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.692 06:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:11.692 06:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.692 06:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.692 06:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.692 06:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:11.692 06:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.692 06:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.692 06:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.692 06:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:11.692 06:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:20:11.692 06:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:11.692 06:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:11.692 06:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:11.692 06:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:11.692 06:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWIwOWVkNThmMWRlYTBmYjdlNTI5OGRkNjdiMTM0NzFmMDViZTBiMzJiYzBiMDkyahDVDg==: 00:20:11.692 06:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGM0OGRhNTNhYmQ3ZDhlOWNhODEyOGU1Y2I0Yzg0ZTQ3MWE1MTJhNGUzNTA4NTBllIKtkA==: 00:20:11.692 06:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:11.692 06:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:11.692 06:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWIwOWVkNThmMWRlYTBmYjdlNTI5OGRkNjdiMTM0NzFmMDViZTBiMzJiYzBiMDkyahDVDg==: 00:20:11.692 06:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGM0OGRhNTNhYmQ3ZDhlOWNhODEyOGU1Y2I0Yzg0ZTQ3MWE1MTJhNGUzNTA4NTBllIKtkA==: ]] 00:20:11.692 06:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGM0OGRhNTNhYmQ3ZDhlOWNhODEyOGU1Y2I0Yzg0ZTQ3MWE1MTJhNGUzNTA4NTBllIKtkA==: 00:20:11.692 06:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:20:11.692 06:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:11.692 06:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:11.692 06:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:11.692 06:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:11.692 06:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:11.692 06:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:11.692 06:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.692 06:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.692 06:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.692 06:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:11.692 06:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:11.692 06:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:11.692 06:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:11.692 06:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:11.692 06:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:11.692 06:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:11.692 06:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:11.692 06:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:11.692 06:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:11.692 06:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:11.692 06:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:11.692 06:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.692 06:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.260 nvme0n1 00:20:12.260 06:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.260 06:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:12.260 06:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.260 06:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:12.260 06:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.260 06:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.260 06:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.260 06:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:12.260 06:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.260 06:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.260 06:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.260 06:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:12.260 06:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:20:12.260 06:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:12.260 06:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:12.260 06:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:12.260 06:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:12.260 06:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2ZiYmMwNzNmMTk3MzRjNjg2MDZjMjI2ZjFmN2NkMDZLb0jA: 00:20:12.260 06:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTBiMmVlYzY3N2Q5YzU2ZmJkNDhlYzc1NWRmMTVkNzANgd8Z: 00:20:12.260 06:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:12.260 06:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:12.260 06:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2ZiYmMwNzNmMTk3MzRjNjg2MDZjMjI2ZjFmN2NkMDZLb0jA: 00:20:12.260 06:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTBiMmVlYzY3N2Q5YzU2ZmJkNDhlYzc1NWRmMTVkNzANgd8Z: ]] 00:20:12.260 06:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTBiMmVlYzY3N2Q5YzU2ZmJkNDhlYzc1NWRmMTVkNzANgd8Z: 00:20:12.260 06:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:20:12.260 06:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:12.260 06:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:12.260 06:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:12.260 06:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:12.260 06:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:12.260 06:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:20:12.260 06:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.260 06:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.260 06:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.260 06:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:12.260 06:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:12.260 06:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:12.260 06:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:12.260 06:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:12.260 06:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:12.260 06:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:12.260 06:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:12.260 06:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:12.260 06:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:12.260 06:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:12.260 06:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.260 06:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.260 06:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.830 nvme0n1 00:20:12.830 06:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.830 06:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:12.830 06:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:12.830 06:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.830 06:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.830 06:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.830 06:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.830 06:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:12.830 06:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.830 06:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.830 06:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.830 06:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:12.830 06:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:20:12.830 06:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:12.830 06:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:12.830 06:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:12.830 06:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:12.830 06:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YzgxYzEwMjczMmFhNGM4YTk1OTA4ZmM2MGEyYWFlN2U2Nzg3OGU5NjA1YTU2NzkwAoSCHA==: 00:20:12.830 06:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjczNmZiMGU5ZTExZmQ2ZGM0NzAxNjRlYjJiMTRjNWaqik2x: 00:20:12.830 06:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:12.830 06:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:12.830 06:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzgxYzEwMjczMmFhNGM4YTk1OTA4ZmM2MGEyYWFlN2U2Nzg3OGU5NjA1YTU2NzkwAoSCHA==: 00:20:12.830 06:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjczNmZiMGU5ZTExZmQ2ZGM0NzAxNjRlYjJiMTRjNWaqik2x: ]] 00:20:12.830 06:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjczNmZiMGU5ZTExZmQ2ZGM0NzAxNjRlYjJiMTRjNWaqik2x: 00:20:12.830 06:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:20:12.830 06:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:12.830 06:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:12.830 06:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:12.830 06:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:12.830 06:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:12.830 06:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:12.830 06:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.830 06:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.830 06:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.830 06:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:12.830 06:07:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:12.830 06:07:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:12.830 06:07:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:12.830 06:07:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:12.830 06:07:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:12.830 06:07:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:12.830 06:07:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:12.830 06:07:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:12.830 06:07:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:12.830 06:07:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:12.830 06:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:12.830 06:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.830 06:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.424 nvme0n1 00:20:13.425 06:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.425 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:20:13.425 06:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.425 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:13.425 06:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.425 06:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.425 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.425 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:13.425 06:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.425 06:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.691 06:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.691 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:13.691 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:20:13.691 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:13.691 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:13.691 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:13.691 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:13.691 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzY2NjIyYzBjNTczMWYzMWEyY2M2ZWI4NjY3ZTFkZTVmYTQzMGIzYjFjYzdhNTAyZjAyMmE3ZjYzY2ZmZDdlZJXfI5A=: 00:20:13.691 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:13.691 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:13.691 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:13.691 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzY2NjIyYzBjNTczMWYzMWEyY2M2ZWI4NjY3ZTFkZTVmYTQzMGIzYjFjYzdhNTAyZjAyMmE3ZjYzY2ZmZDdlZJXfI5A=: 00:20:13.691 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:13.691 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:20:13.691 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:13.691 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:13.691 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:13.691 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:13.691 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:13.691 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:13.691 06:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.691 06:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.691 06:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.691 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:13.691 06:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:13.691 06:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:13.691 06:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:13.691 06:07:05 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:13.691 06:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:13.691 06:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:13.691 06:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:13.691 06:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:13.691 06:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:13.691 06:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:13.691 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:13.691 06:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.691 06:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.260 nvme0n1 00:20:14.260 06:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.260 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:14.260 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:14.260 06:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.260 06:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.260 06:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.260 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.260 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:14.260 06:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.260 06:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.260 06:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.260 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:20:14.260 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:14.260 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:14.260 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:20:14.260 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:14.260 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:14.260 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:14.260 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:14.260 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ0NjI1YzA4NGQyZGYyNjU4Y2NiOTMyMzAwZmExZDRDCbJ3: 00:20:14.260 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTM5MzYxMTZjZWUzNjYzNGRhYWMxYTQwNTY0ZTc4NmJhNGYzNzU3ODI0YjcxOGRmYTkzN2M2NjUzZTIzODVkY07wd4w=: 00:20:14.260 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:14.260 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:14.260 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NjQ0NjI1YzA4NGQyZGYyNjU4Y2NiOTMyMzAwZmExZDRDCbJ3: 00:20:14.260 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTM5MzYxMTZjZWUzNjYzNGRhYWMxYTQwNTY0ZTc4NmJhNGYzNzU3ODI0YjcxOGRmYTkzN2M2NjUzZTIzODVkY07wd4w=: ]] 00:20:14.260 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTM5MzYxMTZjZWUzNjYzNGRhYWMxYTQwNTY0ZTc4NmJhNGYzNzU3ODI0YjcxOGRmYTkzN2M2NjUzZTIzODVkY07wd4w=: 00:20:14.260 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:20:14.260 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:14.260 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:14.260 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:14.260 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:14.260 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:14.260 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:14.260 06:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.260 06:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.260 06:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.260 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:14.260 06:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:14.260 06:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:14.261 06:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:14.261 06:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:14.261 06:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:14.261 06:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:14.261 06:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:14.261 06:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:14.261 06:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:14.261 06:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:14.261 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.261 06:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.261 06:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.261 nvme0n1 00:20:14.261 06:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.261 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:14.261 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:14.261 06:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.261 06:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.261 06:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.261 06:07:05 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.261 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:14.261 06:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.261 06:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.261 06:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.261 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:14.261 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:20:14.261 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:14.261 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:14.261 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:14.261 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:14.261 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWIwOWVkNThmMWRlYTBmYjdlNTI5OGRkNjdiMTM0NzFmMDViZTBiMzJiYzBiMDkyahDVDg==: 00:20:14.261 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGM0OGRhNTNhYmQ3ZDhlOWNhODEyOGU1Y2I0Yzg0ZTQ3MWE1MTJhNGUzNTA4NTBllIKtkA==: 00:20:14.261 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:14.261 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:14.261 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWIwOWVkNThmMWRlYTBmYjdlNTI5OGRkNjdiMTM0NzFmMDViZTBiMzJiYzBiMDkyahDVDg==: 00:20:14.261 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGM0OGRhNTNhYmQ3ZDhlOWNhODEyOGU1Y2I0Yzg0ZTQ3MWE1MTJhNGUzNTA4NTBllIKtkA==: ]] 00:20:14.261 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGM0OGRhNTNhYmQ3ZDhlOWNhODEyOGU1Y2I0Yzg0ZTQ3MWE1MTJhNGUzNTA4NTBllIKtkA==: 00:20:14.261 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:20:14.261 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:14.261 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:14.261 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:14.261 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:14.261 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:14.261 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:14.261 06:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.261 06:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.261 06:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.261 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:14.261 06:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:14.261 06:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:14.261 06:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:14.261 06:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:14.261 06:07:05 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:14.261 06:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:14.261 06:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:14.261 06:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:14.261 06:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:14.261 06:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:14.261 06:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:14.261 06:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.261 06:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.520 nvme0n1 00:20:14.520 06:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.520 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:14.520 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:14.520 06:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.520 06:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.520 06:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.520 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.520 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:14.520 06:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.520 06:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.520 06:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.520 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:14.520 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:20:14.520 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:14.520 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:14.520 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:14.520 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:14.520 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2ZiYmMwNzNmMTk3MzRjNjg2MDZjMjI2ZjFmN2NkMDZLb0jA: 00:20:14.520 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTBiMmVlYzY3N2Q5YzU2ZmJkNDhlYzc1NWRmMTVkNzANgd8Z: 00:20:14.520 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:14.520 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:14.520 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2ZiYmMwNzNmMTk3MzRjNjg2MDZjMjI2ZjFmN2NkMDZLb0jA: 00:20:14.520 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTBiMmVlYzY3N2Q5YzU2ZmJkNDhlYzc1NWRmMTVkNzANgd8Z: ]] 00:20:14.520 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTBiMmVlYzY3N2Q5YzU2ZmJkNDhlYzc1NWRmMTVkNzANgd8Z: 00:20:14.520 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:20:14.520 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:14.520 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:14.520 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:14.520 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:14.520 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:14.520 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:14.520 06:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.520 06:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.520 06:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.520 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:14.520 06:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:14.520 06:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:14.520 06:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:14.520 06:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:14.520 06:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:14.520 06:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:14.520 06:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:14.520 06:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:14.520 06:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:14.520 06:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:14.520 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.520 06:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.520 06:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.779 nvme0n1 00:20:14.779 06:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.779 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:14.779 06:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.779 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:14.779 06:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.779 06:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.779 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.779 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:14.779 06:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.779 06:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.779 06:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.779 06:07:06 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:14.779 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:20:14.779 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:14.779 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:14.779 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:14.779 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:14.779 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzgxYzEwMjczMmFhNGM4YTk1OTA4ZmM2MGEyYWFlN2U2Nzg3OGU5NjA1YTU2NzkwAoSCHA==: 00:20:14.779 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjczNmZiMGU5ZTExZmQ2ZGM0NzAxNjRlYjJiMTRjNWaqik2x: 00:20:14.779 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:14.779 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:14.779 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzgxYzEwMjczMmFhNGM4YTk1OTA4ZmM2MGEyYWFlN2U2Nzg3OGU5NjA1YTU2NzkwAoSCHA==: 00:20:14.779 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjczNmZiMGU5ZTExZmQ2ZGM0NzAxNjRlYjJiMTRjNWaqik2x: ]] 00:20:14.779 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjczNmZiMGU5ZTExZmQ2ZGM0NzAxNjRlYjJiMTRjNWaqik2x: 00:20:14.779 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:20:14.779 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:14.779 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:14.779 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:14.779 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:14.779 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:14.779 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:14.779 06:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.779 06:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.779 06:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.779 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:14.779 06:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:14.779 06:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:14.779 06:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:14.779 06:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:14.779 06:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:14.779 06:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:14.779 06:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:14.779 06:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:14.779 06:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:14.779 06:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:14.779 06:07:06 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:14.779 06:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.779 06:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.779 nvme0n1 00:20:14.779 06:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.779 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:14.779 06:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.779 06:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.779 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:14.779 06:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.037 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.037 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:15.037 06:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.037 06:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzY2NjIyYzBjNTczMWYzMWEyY2M2ZWI4NjY3ZTFkZTVmYTQzMGIzYjFjYzdhNTAyZjAyMmE3ZjYzY2ZmZDdlZJXfI5A=: 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzY2NjIyYzBjNTczMWYzMWEyY2M2ZWI4NjY3ZTFkZTVmYTQzMGIzYjFjYzdhNTAyZjAyMmE3ZjYzY2ZmZDdlZJXfI5A=: 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.038 nvme0n1 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NjQ0NjI1YzA4NGQyZGYyNjU4Y2NiOTMyMzAwZmExZDRDCbJ3: 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTM5MzYxMTZjZWUzNjYzNGRhYWMxYTQwNTY0ZTc4NmJhNGYzNzU3ODI0YjcxOGRmYTkzN2M2NjUzZTIzODVkY07wd4w=: 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ0NjI1YzA4NGQyZGYyNjU4Y2NiOTMyMzAwZmExZDRDCbJ3: 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTM5MzYxMTZjZWUzNjYzNGRhYWMxYTQwNTY0ZTc4NmJhNGYzNzU3ODI0YjcxOGRmYTkzN2M2NjUzZTIzODVkY07wd4w=: ]] 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTM5MzYxMTZjZWUzNjYzNGRhYWMxYTQwNTY0ZTc4NmJhNGYzNzU3ODI0YjcxOGRmYTkzN2M2NjUzZTIzODVkY07wd4w=: 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.038 06:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.296 nvme0n1 00:20:15.296 06:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.296 
06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:15.296 06:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.296 06:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.296 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:15.296 06:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.296 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.296 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:15.296 06:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.296 06:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.296 06:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.296 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:15.296 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:20:15.296 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:15.296 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:15.296 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:15.296 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:15.296 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWIwOWVkNThmMWRlYTBmYjdlNTI5OGRkNjdiMTM0NzFmMDViZTBiMzJiYzBiMDkyahDVDg==: 00:20:15.296 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGM0OGRhNTNhYmQ3ZDhlOWNhODEyOGU1Y2I0Yzg0ZTQ3MWE1MTJhNGUzNTA4NTBllIKtkA==: 00:20:15.296 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:15.296 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:15.297 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWIwOWVkNThmMWRlYTBmYjdlNTI5OGRkNjdiMTM0NzFmMDViZTBiMzJiYzBiMDkyahDVDg==: 00:20:15.297 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGM0OGRhNTNhYmQ3ZDhlOWNhODEyOGU1Y2I0Yzg0ZTQ3MWE1MTJhNGUzNTA4NTBllIKtkA==: ]] 00:20:15.297 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGM0OGRhNTNhYmQ3ZDhlOWNhODEyOGU1Y2I0Yzg0ZTQ3MWE1MTJhNGUzNTA4NTBllIKtkA==: 00:20:15.297 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:20:15.297 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:15.297 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:15.297 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:15.297 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:15.297 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:15.297 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:15.297 06:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.297 06:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.297 06:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.297 06:07:06 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:15.297 06:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:15.297 06:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:15.297 06:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:15.297 06:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:15.297 06:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:15.297 06:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:15.297 06:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:15.297 06:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:15.297 06:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:15.297 06:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:15.297 06:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:15.297 06:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.297 06:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.555 nvme0n1 00:20:15.555 06:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.555 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:15.555 06:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.555 06:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.555 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:15.555 06:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.555 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.555 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:15.555 06:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.555 06:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.555 06:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.555 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:15.555 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:20:15.555 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:15.555 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:15.555 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:15.555 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:15.555 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2ZiYmMwNzNmMTk3MzRjNjg2MDZjMjI2ZjFmN2NkMDZLb0jA: 00:20:15.555 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTBiMmVlYzY3N2Q5YzU2ZmJkNDhlYzc1NWRmMTVkNzANgd8Z: 00:20:15.555 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:15.555 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
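Blocks like these repeat for every combination the test sweeps: an outer loop over digests, one over DH groups, and an inner loop over key indices, keying the target first and then authenticating a host connection against it. An outline of that driver loop, reduced to the calls visible in the trace (array contents abbreviated to the values appearing in this excerpt), looks roughly like this:

# Outline of the sweep driven by the host/auth.sh@100-104 loops in the trace.
# Lists and secrets are abbreviated to what this excerpt shows.
digests=(sha384 sha512)
dhgroups=(ffdhe2048 ffdhe3072 ffdhe6144 ffdhe8192)
keys=(DHHC-1:00:... DHHC-1:00:... DHHC-1:01:... DHHC-1:02:... DHHC-1:03:...)
ckeys=(DHHC-1:03:... DHHC-1:02:... DHHC-1:01:... DHHC-1:00:... "")

for digest in "${digests[@]}"; do
	for dhgroup in "${dhgroups[@]}"; do
		for keyid in "${!keys[@]}"; do
			nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side
			connect_authenticate "$digest" "$dhgroup" "$keyid"  # host side: attach, verify, detach
		done
	done
done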
00:20:15.555 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2ZiYmMwNzNmMTk3MzRjNjg2MDZjMjI2ZjFmN2NkMDZLb0jA: 00:20:15.555 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTBiMmVlYzY3N2Q5YzU2ZmJkNDhlYzc1NWRmMTVkNzANgd8Z: ]] 00:20:15.555 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTBiMmVlYzY3N2Q5YzU2ZmJkNDhlYzc1NWRmMTVkNzANgd8Z: 00:20:15.555 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:20:15.555 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:15.555 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:15.555 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:15.555 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:15.555 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:15.555 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:15.555 06:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.555 06:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.555 06:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.555 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:15.555 06:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:15.555 06:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:15.555 06:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:15.555 06:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:15.555 06:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:15.555 06:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:15.555 06:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:15.555 06:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:15.555 06:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:15.555 06:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:15.555 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:15.555 06:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.555 06:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.814 nvme0n1 00:20:15.814 06:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.814 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:15.814 06:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.814 06:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.814 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:15.814 06:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.814 06:07:07 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.814 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:15.814 06:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.814 06:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.814 06:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.814 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:15.814 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:20:15.814 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:15.814 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:15.814 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:15.814 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:15.814 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzgxYzEwMjczMmFhNGM4YTk1OTA4ZmM2MGEyYWFlN2U2Nzg3OGU5NjA1YTU2NzkwAoSCHA==: 00:20:15.814 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjczNmZiMGU5ZTExZmQ2ZGM0NzAxNjRlYjJiMTRjNWaqik2x: 00:20:15.814 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:15.814 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:15.814 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzgxYzEwMjczMmFhNGM4YTk1OTA4ZmM2MGEyYWFlN2U2Nzg3OGU5NjA1YTU2NzkwAoSCHA==: 00:20:15.814 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjczNmZiMGU5ZTExZmQ2ZGM0NzAxNjRlYjJiMTRjNWaqik2x: ]] 00:20:15.814 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjczNmZiMGU5ZTExZmQ2ZGM0NzAxNjRlYjJiMTRjNWaqik2x: 00:20:15.814 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:20:15.814 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:15.814 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:15.814 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:15.814 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:15.814 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:15.814 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:15.814 06:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.814 06:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.814 06:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.814 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:15.814 06:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:15.814 06:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:15.814 06:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:15.814 06:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:15.814 06:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:20:15.814 06:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:15.814 06:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:15.814 06:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:15.814 06:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:15.814 06:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:15.814 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:15.814 06:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.814 06:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.814 nvme0n1 00:20:15.814 06:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.814 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:15.814 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:15.814 06:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.814 06:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.072 06:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.072 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.072 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:16.072 06:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.072 06:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.072 06:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.072 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:16.072 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:20:16.072 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:16.072 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:16.072 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:16.072 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:16.072 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzY2NjIyYzBjNTczMWYzMWEyY2M2ZWI4NjY3ZTFkZTVmYTQzMGIzYjFjYzdhNTAyZjAyMmE3ZjYzY2ZmZDdlZJXfI5A=: 00:20:16.072 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:16.072 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:16.072 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:16.072 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzY2NjIyYzBjNTczMWYzMWEyY2M2ZWI4NjY3ZTFkZTVmYTQzMGIzYjFjYzdhNTAyZjAyMmE3ZjYzY2ZmZDdlZJXfI5A=: 00:20:16.072 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:16.072 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:20:16.072 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:16.072 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:16.072 
06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:16.072 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:16.072 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:16.072 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:16.072 06:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.072 06:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.072 06:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.072 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:16.072 06:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:16.072 06:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:16.072 06:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:16.072 06:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:16.072 06:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:16.073 06:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:16.073 06:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:16.073 06:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:16.073 06:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:16.073 06:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:16.073 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:16.073 06:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.073 06:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.073 nvme0n1 00:20:16.073 06:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.073 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:16.073 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:16.073 06:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.073 06:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.073 06:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.073 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.073 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:16.073 06:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.073 06:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.331 06:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.331 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:16.331 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:16.331 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:20:16.332 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:16.332 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:16.332 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:16.332 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:16.332 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ0NjI1YzA4NGQyZGYyNjU4Y2NiOTMyMzAwZmExZDRDCbJ3: 00:20:16.332 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTM5MzYxMTZjZWUzNjYzNGRhYWMxYTQwNTY0ZTc4NmJhNGYzNzU3ODI0YjcxOGRmYTkzN2M2NjUzZTIzODVkY07wd4w=: 00:20:16.332 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:16.332 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:16.332 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ0NjI1YzA4NGQyZGYyNjU4Y2NiOTMyMzAwZmExZDRDCbJ3: 00:20:16.332 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTM5MzYxMTZjZWUzNjYzNGRhYWMxYTQwNTY0ZTc4NmJhNGYzNzU3ODI0YjcxOGRmYTkzN2M2NjUzZTIzODVkY07wd4w=: ]] 00:20:16.332 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTM5MzYxMTZjZWUzNjYzNGRhYWMxYTQwNTY0ZTc4NmJhNGYzNzU3ODI0YjcxOGRmYTkzN2M2NjUzZTIzODVkY07wd4w=: 00:20:16.332 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:20:16.332 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:16.332 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:16.332 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:16.332 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:16.332 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:16.332 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:16.332 06:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.332 06:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.332 06:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.332 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:16.332 06:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:16.332 06:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:16.332 06:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:16.332 06:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:16.332 06:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:16.332 06:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:16.332 06:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:16.332 06:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:16.332 06:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:16.332 06:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:16.332 06:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.332 06:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.332 06:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.332 nvme0n1 00:20:16.332 06:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.332 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:16.332 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:16.332 06:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.332 06:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.332 06:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.596 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.597 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:16.597 06:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.597 06:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.597 06:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.597 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:16.597 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:20:16.597 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:16.597 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:16.597 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:16.597 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:16.597 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWIwOWVkNThmMWRlYTBmYjdlNTI5OGRkNjdiMTM0NzFmMDViZTBiMzJiYzBiMDkyahDVDg==: 00:20:16.597 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGM0OGRhNTNhYmQ3ZDhlOWNhODEyOGU1Y2I0Yzg0ZTQ3MWE1MTJhNGUzNTA4NTBllIKtkA==: 00:20:16.597 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:16.597 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:16.597 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWIwOWVkNThmMWRlYTBmYjdlNTI5OGRkNjdiMTM0NzFmMDViZTBiMzJiYzBiMDkyahDVDg==: 00:20:16.597 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGM0OGRhNTNhYmQ3ZDhlOWNhODEyOGU1Y2I0Yzg0ZTQ3MWE1MTJhNGUzNTA4NTBllIKtkA==: ]] 00:20:16.597 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGM0OGRhNTNhYmQ3ZDhlOWNhODEyOGU1Y2I0Yzg0ZTQ3MWE1MTJhNGUzNTA4NTBllIKtkA==: 00:20:16.597 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:20:16.597 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:16.597 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:16.597 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:16.597 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:16.597 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:16.597 06:07:08 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:16.597 06:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.597 06:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.597 06:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.597 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:16.597 06:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:16.597 06:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:16.597 06:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:16.597 06:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:16.597 06:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:16.597 06:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:16.597 06:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:16.597 06:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:16.597 06:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:16.597 06:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:16.597 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.597 06:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.597 06:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.859 nvme0n1 00:20:16.859 06:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.859 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:16.859 06:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.859 06:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.859 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:16.859 06:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.859 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.859 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:16.859 06:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.859 06:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.859 06:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.859 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:16.859 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:20:16.859 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:16.859 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:16.859 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:16.859 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
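Before every attach the trace runs get_main_ns_ip, which picks the name of the environment variable to dereference based on the transport and then echoes its value. A hedged reconstruction of that selection is below; the TEST_TRANSPORT and NVMF_INITIATOR_IP assignments and the indirection step are assumptions, since the trace only shows the already-resolved value 10.0.0.1.

    # choose the host-side IP variable by transport, as traced above
    TEST_TRANSPORT=tcp                      # assumed; trace shows [[ -z tcp ]]
    NVMF_INITIATOR_IP=10.0.0.1              # assumed; trace echoes 10.0.0.1
    declare -A ip_candidates=(
        ["rdma"]=NVMF_FIRST_TARGET_IP
        ["tcp"]=NVMF_INITIATOR_IP
    )
    ip=${ip_candidates[$TEST_TRANSPORT]}    # tcp -> NVMF_INITIATOR_IP
    [[ -z ${!ip} ]] || echo "${!ip}"        # indirect expansion -> 10.0.0.1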
00:20:16.859 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2ZiYmMwNzNmMTk3MzRjNjg2MDZjMjI2ZjFmN2NkMDZLb0jA: 00:20:16.859 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTBiMmVlYzY3N2Q5YzU2ZmJkNDhlYzc1NWRmMTVkNzANgd8Z: 00:20:16.859 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:16.859 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:16.859 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2ZiYmMwNzNmMTk3MzRjNjg2MDZjMjI2ZjFmN2NkMDZLb0jA: 00:20:16.859 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTBiMmVlYzY3N2Q5YzU2ZmJkNDhlYzc1NWRmMTVkNzANgd8Z: ]] 00:20:16.859 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTBiMmVlYzY3N2Q5YzU2ZmJkNDhlYzc1NWRmMTVkNzANgd8Z: 00:20:16.859 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:20:16.859 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:16.859 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:16.859 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:16.859 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:16.859 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:16.859 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:16.859 06:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.859 06:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.859 06:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.859 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:16.860 06:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:16.860 06:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:16.860 06:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:16.860 06:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:16.860 06:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:16.860 06:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:16.860 06:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:16.860 06:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:16.860 06:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:16.860 06:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:16.860 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.860 06:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.860 06:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.118 nvme0n1 00:20:17.118 06:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.118 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:20:17.118 06:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.118 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:17.118 06:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.118 06:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.118 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.118 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:17.118 06:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.118 06:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.118 06:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.118 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:17.118 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:20:17.118 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:17.118 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:17.118 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:17.118 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:17.118 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzgxYzEwMjczMmFhNGM4YTk1OTA4ZmM2MGEyYWFlN2U2Nzg3OGU5NjA1YTU2NzkwAoSCHA==: 00:20:17.118 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjczNmZiMGU5ZTExZmQ2ZGM0NzAxNjRlYjJiMTRjNWaqik2x: 00:20:17.118 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:17.118 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:17.118 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzgxYzEwMjczMmFhNGM4YTk1OTA4ZmM2MGEyYWFlN2U2Nzg3OGU5NjA1YTU2NzkwAoSCHA==: 00:20:17.119 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjczNmZiMGU5ZTExZmQ2ZGM0NzAxNjRlYjJiMTRjNWaqik2x: ]] 00:20:17.119 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjczNmZiMGU5ZTExZmQ2ZGM0NzAxNjRlYjJiMTRjNWaqik2x: 00:20:17.119 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:20:17.119 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:17.119 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:17.119 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:17.119 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:17.119 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:17.119 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:17.119 06:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.119 06:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.119 06:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.119 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:17.119 06:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:20:17.119 06:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:17.119 06:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:17.119 06:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:17.119 06:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:17.119 06:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:17.119 06:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:17.119 06:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:17.119 06:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:17.119 06:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:17.119 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:17.119 06:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.119 06:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.376 nvme0n1 00:20:17.376 06:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.376 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:17.376 06:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:17.376 06:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.377 06:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.377 06:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.377 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.377 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:17.377 06:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.377 06:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.377 06:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.377 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:17.377 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:20:17.377 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:17.377 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:17.377 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:17.377 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:17.377 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzY2NjIyYzBjNTczMWYzMWEyY2M2ZWI4NjY3ZTFkZTVmYTQzMGIzYjFjYzdhNTAyZjAyMmE3ZjYzY2ZmZDdlZJXfI5A=: 00:20:17.377 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:17.377 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:17.377 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:17.377 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YzY2NjIyYzBjNTczMWYzMWEyY2M2ZWI4NjY3ZTFkZTVmYTQzMGIzYjFjYzdhNTAyZjAyMmE3ZjYzY2ZmZDdlZJXfI5A=: 00:20:17.377 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:17.377 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:20:17.377 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:17.377 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:17.377 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:17.377 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:17.377 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:17.377 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:17.377 06:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.377 06:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.377 06:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.377 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:17.377 06:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:17.377 06:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:17.377 06:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:17.377 06:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:17.377 06:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:17.377 06:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:17.377 06:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:17.377 06:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:17.377 06:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:17.377 06:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:17.377 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:17.377 06:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.377 06:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.634 nvme0n1 00:20:17.634 06:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.634 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:17.634 06:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.634 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:17.634 06:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.634 06:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.634 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.634 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:17.634 06:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:20:17.634 06:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.634 06:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.635 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:17.635 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:17.635 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:20:17.635 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:17.635 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:17.635 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:17.635 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:17.635 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ0NjI1YzA4NGQyZGYyNjU4Y2NiOTMyMzAwZmExZDRDCbJ3: 00:20:17.635 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTM5MzYxMTZjZWUzNjYzNGRhYWMxYTQwNTY0ZTc4NmJhNGYzNzU3ODI0YjcxOGRmYTkzN2M2NjUzZTIzODVkY07wd4w=: 00:20:17.635 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:17.635 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:17.635 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ0NjI1YzA4NGQyZGYyNjU4Y2NiOTMyMzAwZmExZDRDCbJ3: 00:20:17.635 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTM5MzYxMTZjZWUzNjYzNGRhYWMxYTQwNTY0ZTc4NmJhNGYzNzU3ODI0YjcxOGRmYTkzN2M2NjUzZTIzODVkY07wd4w=: ]] 00:20:17.635 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTM5MzYxMTZjZWUzNjYzNGRhYWMxYTQwNTY0ZTc4NmJhNGYzNzU3ODI0YjcxOGRmYTkzN2M2NjUzZTIzODVkY07wd4w=: 00:20:17.635 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:20:17.635 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:17.635 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:17.635 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:17.635 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:17.635 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:17.635 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:17.635 06:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.635 06:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.635 06:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.635 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:17.635 06:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:17.635 06:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:17.635 06:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:17.635 06:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:17.635 06:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:17.635 06:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:20:17.635 06:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:17.635 06:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:17.635 06:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:17.635 06:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:17.635 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:17.635 06:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.635 06:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.202 nvme0n1 00:20:18.202 06:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.202 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:18.202 06:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.202 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:18.202 06:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.202 06:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.202 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.202 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:18.202 06:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.202 06:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.202 06:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.202 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:18.202 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:20:18.202 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:18.202 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:18.202 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:18.202 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:18.202 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWIwOWVkNThmMWRlYTBmYjdlNTI5OGRkNjdiMTM0NzFmMDViZTBiMzJiYzBiMDkyahDVDg==: 00:20:18.202 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGM0OGRhNTNhYmQ3ZDhlOWNhODEyOGU1Y2I0Yzg0ZTQ3MWE1MTJhNGUzNTA4NTBllIKtkA==: 00:20:18.202 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:18.202 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:18.202 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWIwOWVkNThmMWRlYTBmYjdlNTI5OGRkNjdiMTM0NzFmMDViZTBiMzJiYzBiMDkyahDVDg==: 00:20:18.202 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGM0OGRhNTNhYmQ3ZDhlOWNhODEyOGU1Y2I0Yzg0ZTQ3MWE1MTJhNGUzNTA4NTBllIKtkA==: ]] 00:20:18.202 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGM0OGRhNTNhYmQ3ZDhlOWNhODEyOGU1Y2I0Yzg0ZTQ3MWE1MTJhNGUzNTA4NTBllIKtkA==: 00:20:18.202 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
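The keyid 4 passes above attach with --dhchap-key key4 and no --dhchap-ctrlr-key because their ckeys entry is empty, so the auth.sh@58 expansion drops the flag entirely. A small illustration of that expansion, using hypothetical placeholder secrets:

    # index 0 has a controller key, index 1 deliberately does not (both placeholders)
    ckeys=("DHHC-1:03:placeholder-ctrlr-key" "")
    for keyid in 0 1; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo rpc_cmd bdev_nvme_attach_controller --dhchap-key "key${keyid}" "${ckey[@]}"
    done
    # prints: ... --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # prints: ... --dhchap-key key1            (ctrlr-key flag omitted)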
00:20:18.202 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:18.202 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:18.202 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:18.202 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:18.202 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:18.202 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:18.202 06:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.202 06:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.202 06:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.202 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:18.202 06:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:18.202 06:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:18.202 06:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:18.202 06:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:18.202 06:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:18.202 06:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:18.202 06:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:18.202 06:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:18.202 06:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:18.202 06:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:18.202 06:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:18.202 06:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.202 06:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.769 nvme0n1 00:20:18.769 06:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.769 06:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:18.769 06:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:18.769 06:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.769 06:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.769 06:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.769 06:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.769 06:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:18.769 06:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.769 06:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.769 06:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.769 06:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:20:18.769 06:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:20:18.769 06:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:18.769 06:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:18.769 06:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:18.769 06:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:18.769 06:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2ZiYmMwNzNmMTk3MzRjNjg2MDZjMjI2ZjFmN2NkMDZLb0jA: 00:20:18.769 06:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTBiMmVlYzY3N2Q5YzU2ZmJkNDhlYzc1NWRmMTVkNzANgd8Z: 00:20:18.769 06:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:18.769 06:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:18.769 06:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2ZiYmMwNzNmMTk3MzRjNjg2MDZjMjI2ZjFmN2NkMDZLb0jA: 00:20:18.769 06:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTBiMmVlYzY3N2Q5YzU2ZmJkNDhlYzc1NWRmMTVkNzANgd8Z: ]] 00:20:18.769 06:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTBiMmVlYzY3N2Q5YzU2ZmJkNDhlYzc1NWRmMTVkNzANgd8Z: 00:20:18.769 06:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:20:18.769 06:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:18.769 06:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:18.769 06:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:18.769 06:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:18.769 06:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:18.769 06:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:18.769 06:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.769 06:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.769 06:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.769 06:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:18.769 06:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:18.769 06:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:18.769 06:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:18.769 06:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:18.769 06:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:18.769 06:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:18.769 06:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:18.769 06:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:18.769 06:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:18.769 06:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:18.769 06:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:18.769 06:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.769 06:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.027 nvme0n1 00:20:19.027 06:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.027 06:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:19.027 06:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:19.027 06:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.027 06:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.027 06:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.286 06:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.286 06:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:19.286 06:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.287 06:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.287 06:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.287 06:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:19.287 06:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:20:19.287 06:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:19.287 06:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:19.287 06:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:19.287 06:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:19.287 06:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzgxYzEwMjczMmFhNGM4YTk1OTA4ZmM2MGEyYWFlN2U2Nzg3OGU5NjA1YTU2NzkwAoSCHA==: 00:20:19.287 06:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjczNmZiMGU5ZTExZmQ2ZGM0NzAxNjRlYjJiMTRjNWaqik2x: 00:20:19.287 06:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:19.287 06:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:19.287 06:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzgxYzEwMjczMmFhNGM4YTk1OTA4ZmM2MGEyYWFlN2U2Nzg3OGU5NjA1YTU2NzkwAoSCHA==: 00:20:19.287 06:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjczNmZiMGU5ZTExZmQ2ZGM0NzAxNjRlYjJiMTRjNWaqik2x: ]] 00:20:19.287 06:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjczNmZiMGU5ZTExZmQ2ZGM0NzAxNjRlYjJiMTRjNWaqik2x: 00:20:19.287 06:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:20:19.287 06:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:19.287 06:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:19.287 06:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:19.287 06:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:19.287 06:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:19.287 06:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:19.287 06:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.287 06:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.287 06:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.287 06:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:19.287 06:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:19.287 06:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:19.287 06:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:19.287 06:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:19.287 06:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:19.287 06:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:19.287 06:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:19.287 06:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:19.287 06:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:19.287 06:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:19.287 06:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:19.287 06:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.287 06:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.545 nvme0n1 00:20:19.545 06:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.545 06:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:19.545 06:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.545 06:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:19.545 06:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.545 06:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.804 06:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.804 06:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:19.804 06:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.804 06:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.804 06:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.804 06:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:19.804 06:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:20:19.804 06:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:19.804 06:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:19.804 06:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:19.804 06:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:19.804 06:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YzY2NjIyYzBjNTczMWYzMWEyY2M2ZWI4NjY3ZTFkZTVmYTQzMGIzYjFjYzdhNTAyZjAyMmE3ZjYzY2ZmZDdlZJXfI5A=: 00:20:19.804 06:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:19.804 06:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:19.804 06:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:19.804 06:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzY2NjIyYzBjNTczMWYzMWEyY2M2ZWI4NjY3ZTFkZTVmYTQzMGIzYjFjYzdhNTAyZjAyMmE3ZjYzY2ZmZDdlZJXfI5A=: 00:20:19.804 06:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:19.804 06:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:20:19.804 06:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:19.804 06:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:19.804 06:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:19.804 06:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:19.804 06:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:19.804 06:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:19.804 06:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.804 06:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.804 06:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.804 06:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:19.804 06:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:19.804 06:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:19.804 06:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:19.804 06:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:19.804 06:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:19.804 06:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:19.804 06:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:19.804 06:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:19.804 06:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:19.804 06:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:19.804 06:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:19.804 06:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.804 06:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.062 nvme0n1 00:20:20.062 06:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.062 06:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:20.062 06:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.062 06:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.062 06:07:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:20.062 06:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.062 06:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.062 06:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:20.062 06:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.062 06:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.062 06:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.062 06:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:20.062 06:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:20.062 06:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:20:20.062 06:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:20.062 06:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:20.062 06:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:20.062 06:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:20.062 06:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ0NjI1YzA4NGQyZGYyNjU4Y2NiOTMyMzAwZmExZDRDCbJ3: 00:20:20.062 06:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTM5MzYxMTZjZWUzNjYzNGRhYWMxYTQwNTY0ZTc4NmJhNGYzNzU3ODI0YjcxOGRmYTkzN2M2NjUzZTIzODVkY07wd4w=: 00:20:20.062 06:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:20.062 06:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:20.062 06:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ0NjI1YzA4NGQyZGYyNjU4Y2NiOTMyMzAwZmExZDRDCbJ3: 00:20:20.062 06:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTM5MzYxMTZjZWUzNjYzNGRhYWMxYTQwNTY0ZTc4NmJhNGYzNzU3ODI0YjcxOGRmYTkzN2M2NjUzZTIzODVkY07wd4w=: ]] 00:20:20.062 06:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTM5MzYxMTZjZWUzNjYzNGRhYWMxYTQwNTY0ZTc4NmJhNGYzNzU3ODI0YjcxOGRmYTkzN2M2NjUzZTIzODVkY07wd4w=: 00:20:20.062 06:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:20:20.062 06:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:20.062 06:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:20.062 06:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:20.062 06:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:20.062 06:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:20.062 06:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:20.062 06:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.062 06:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.321 06:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.321 06:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:20.321 06:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:20.321 06:07:11 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:20:20.321 06:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:20.321 06:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:20.321 06:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:20.321 06:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:20.321 06:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:20.321 06:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:20.321 06:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:20.321 06:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:20.321 06:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.321 06:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.321 06:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.888 nvme0n1 00:20:20.888 06:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.888 06:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:20.888 06:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.888 06:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:20.888 06:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.888 06:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.888 06:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.888 06:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:20.888 06:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.888 06:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.888 06:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.888 06:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:20.888 06:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:20:20.888 06:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:20.888 06:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:20.888 06:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:20.888 06:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:20.888 06:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWIwOWVkNThmMWRlYTBmYjdlNTI5OGRkNjdiMTM0NzFmMDViZTBiMzJiYzBiMDkyahDVDg==: 00:20:20.888 06:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGM0OGRhNTNhYmQ3ZDhlOWNhODEyOGU1Y2I0Yzg0ZTQ3MWE1MTJhNGUzNTA4NTBllIKtkA==: 00:20:20.888 06:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:20.888 06:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:20.888 06:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YWIwOWVkNThmMWRlYTBmYjdlNTI5OGRkNjdiMTM0NzFmMDViZTBiMzJiYzBiMDkyahDVDg==: 00:20:20.888 06:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGM0OGRhNTNhYmQ3ZDhlOWNhODEyOGU1Y2I0Yzg0ZTQ3MWE1MTJhNGUzNTA4NTBllIKtkA==: ]] 00:20:20.888 06:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGM0OGRhNTNhYmQ3ZDhlOWNhODEyOGU1Y2I0Yzg0ZTQ3MWE1MTJhNGUzNTA4NTBllIKtkA==: 00:20:20.888 06:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:20:20.888 06:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:20.888 06:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:20.888 06:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:20.888 06:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:20.888 06:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:20.888 06:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:20.888 06:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.888 06:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.888 06:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.888 06:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:20.888 06:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:20.888 06:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:20.888 06:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:20.888 06:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:20.888 06:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:20.888 06:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:20.888 06:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:20.888 06:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:20.888 06:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:20.888 06:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:20.888 06:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.888 06:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.888 06:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.822 nvme0n1 00:20:21.822 06:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.822 06:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:21.822 06:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:21.822 06:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.822 06:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.822 06:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.822 06:07:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.822 06:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:21.823 06:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.823 06:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.823 06:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.823 06:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:21.823 06:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:20:21.823 06:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:21.823 06:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:21.823 06:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:21.823 06:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:21.823 06:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2ZiYmMwNzNmMTk3MzRjNjg2MDZjMjI2ZjFmN2NkMDZLb0jA: 00:20:21.823 06:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTBiMmVlYzY3N2Q5YzU2ZmJkNDhlYzc1NWRmMTVkNzANgd8Z: 00:20:21.823 06:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:21.823 06:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:21.823 06:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2ZiYmMwNzNmMTk3MzRjNjg2MDZjMjI2ZjFmN2NkMDZLb0jA: 00:20:21.823 06:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTBiMmVlYzY3N2Q5YzU2ZmJkNDhlYzc1NWRmMTVkNzANgd8Z: ]] 00:20:21.823 06:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTBiMmVlYzY3N2Q5YzU2ZmJkNDhlYzc1NWRmMTVkNzANgd8Z: 00:20:21.823 06:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:20:21.823 06:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:21.823 06:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:21.823 06:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:21.823 06:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:21.823 06:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:21.823 06:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:21.823 06:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.823 06:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.823 06:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.823 06:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:21.823 06:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:21.823 06:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:21.823 06:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:21.823 06:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:21.823 06:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:21.823 06:07:13 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:21.823 06:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:21.823 06:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:21.823 06:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:21.823 06:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:21.823 06:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:21.823 06:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.823 06:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.388 nvme0n1 00:20:22.388 06:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.388 06:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:22.388 06:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:22.388 06:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.388 06:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.646 06:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.647 06:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.647 06:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:22.647 06:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.647 06:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.647 06:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.647 06:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:22.647 06:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:20:22.647 06:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:22.647 06:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:22.647 06:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:22.647 06:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:22.647 06:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzgxYzEwMjczMmFhNGM4YTk1OTA4ZmM2MGEyYWFlN2U2Nzg3OGU5NjA1YTU2NzkwAoSCHA==: 00:20:22.647 06:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjczNmZiMGU5ZTExZmQ2ZGM0NzAxNjRlYjJiMTRjNWaqik2x: 00:20:22.647 06:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:22.647 06:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:22.647 06:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzgxYzEwMjczMmFhNGM4YTk1OTA4ZmM2MGEyYWFlN2U2Nzg3OGU5NjA1YTU2NzkwAoSCHA==: 00:20:22.647 06:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjczNmZiMGU5ZTExZmQ2ZGM0NzAxNjRlYjJiMTRjNWaqik2x: ]] 00:20:22.647 06:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjczNmZiMGU5ZTExZmQ2ZGM0NzAxNjRlYjJiMTRjNWaqik2x: 00:20:22.647 06:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:20:22.647 06:07:14 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:22.647 06:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:22.647 06:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:22.647 06:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:22.647 06:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:22.647 06:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:22.647 06:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.647 06:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.647 06:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.647 06:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:22.647 06:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:22.647 06:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:22.647 06:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:22.647 06:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:22.647 06:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:22.647 06:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:22.647 06:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:22.647 06:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:22.647 06:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:22.647 06:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:22.647 06:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:22.647 06:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.647 06:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.213 nvme0n1 00:20:23.213 06:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.213 06:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:23.213 06:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:23.213 06:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.213 06:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.213 06:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.472 06:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.472 06:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:23.472 06:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.472 06:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.472 06:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.472 06:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:20:23.472 06:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:20:23.472 06:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:23.472 06:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:23.472 06:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:23.472 06:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:23.472 06:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzY2NjIyYzBjNTczMWYzMWEyY2M2ZWI4NjY3ZTFkZTVmYTQzMGIzYjFjYzdhNTAyZjAyMmE3ZjYzY2ZmZDdlZJXfI5A=: 00:20:23.472 06:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:23.472 06:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:23.472 06:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:23.472 06:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzY2NjIyYzBjNTczMWYzMWEyY2M2ZWI4NjY3ZTFkZTVmYTQzMGIzYjFjYzdhNTAyZjAyMmE3ZjYzY2ZmZDdlZJXfI5A=: 00:20:23.472 06:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:23.472 06:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:20:23.472 06:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:23.472 06:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:23.472 06:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:23.472 06:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:23.472 06:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:23.472 06:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:23.472 06:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.472 06:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.472 06:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.472 06:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:23.472 06:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:23.472 06:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:23.472 06:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:23.472 06:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:23.472 06:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:23.472 06:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:23.472 06:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:23.472 06:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:23.472 06:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:23.472 06:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:23.472 06:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:23.472 06:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:20:23.472 06:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.038 nvme0n1 00:20:24.038 06:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.038 06:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:24.038 06:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:24.038 06:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.038 06:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.038 06:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWIwOWVkNThmMWRlYTBmYjdlNTI5OGRkNjdiMTM0NzFmMDViZTBiMzJiYzBiMDkyahDVDg==: 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGM0OGRhNTNhYmQ3ZDhlOWNhODEyOGU1Y2I0Yzg0ZTQ3MWE1MTJhNGUzNTA4NTBllIKtkA==: 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWIwOWVkNThmMWRlYTBmYjdlNTI5OGRkNjdiMTM0NzFmMDViZTBiMzJiYzBiMDkyahDVDg==: 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGM0OGRhNTNhYmQ3ZDhlOWNhODEyOGU1Y2I0Yzg0ZTQ3MWE1MTJhNGUzNTA4NTBllIKtkA==: ]] 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGM0OGRhNTNhYmQ3ZDhlOWNhODEyOGU1Y2I0Yzg0ZTQ3MWE1MTJhNGUzNTA4NTBllIKtkA==: 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:24.297 
06:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.297 request: 00:20:24.297 { 00:20:24.297 "name": "nvme0", 00:20:24.297 "trtype": "tcp", 00:20:24.297 "traddr": "10.0.0.1", 00:20:24.297 "adrfam": "ipv4", 00:20:24.297 "trsvcid": "4420", 00:20:24.297 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:24.297 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:24.297 "prchk_reftag": false, 00:20:24.297 "prchk_guard": false, 00:20:24.297 "hdgst": false, 00:20:24.297 "ddgst": false, 00:20:24.297 "method": "bdev_nvme_attach_controller", 00:20:24.297 "req_id": 1 00:20:24.297 } 00:20:24.297 Got JSON-RPC error response 00:20:24.297 response: 00:20:24.297 { 00:20:24.297 "code": -5, 00:20:24.297 "message": "Input/output error" 00:20:24.297 } 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.297 request: 00:20:24.297 { 00:20:24.297 "name": "nvme0", 00:20:24.297 "trtype": "tcp", 00:20:24.297 "traddr": "10.0.0.1", 00:20:24.297 "adrfam": "ipv4", 00:20:24.297 "trsvcid": "4420", 00:20:24.297 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:24.297 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:24.297 "prchk_reftag": false, 00:20:24.297 "prchk_guard": false, 00:20:24.297 "hdgst": false, 00:20:24.297 "ddgst": false, 00:20:24.297 "dhchap_key": "key2", 00:20:24.297 "method": "bdev_nvme_attach_controller", 00:20:24.297 "req_id": 1 00:20:24.297 } 00:20:24.297 Got JSON-RPC error response 00:20:24.297 response: 00:20:24.297 { 00:20:24.297 "code": -5, 00:20:24.297 "message": "Input/output error" 00:20:24.297 } 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:20:24.297 06:07:15 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.297 06:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.556 06:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:20:24.556 06:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:20:24.556 06:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:24.556 06:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:24.556 06:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:24.556 06:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:24.556 06:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:24.556 06:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:24.556 06:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:24.556 06:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:24.556 06:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:24.556 06:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:24.556 06:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:24.556 06:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:20:24.556 06:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:24.556 06:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:24.556 06:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:24.556 06:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:24.556 06:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:24.556 06:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:24.556 06:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.556 06:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.556 request: 00:20:24.556 { 00:20:24.556 "name": "nvme0", 00:20:24.556 "trtype": "tcp", 00:20:24.556 "traddr": "10.0.0.1", 00:20:24.556 "adrfam": "ipv4", 
00:20:24.556 "trsvcid": "4420", 00:20:24.556 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:24.556 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:24.556 "prchk_reftag": false, 00:20:24.556 "prchk_guard": false, 00:20:24.556 "hdgst": false, 00:20:24.556 "ddgst": false, 00:20:24.556 "dhchap_key": "key1", 00:20:24.556 "dhchap_ctrlr_key": "ckey2", 00:20:24.556 "method": "bdev_nvme_attach_controller", 00:20:24.556 "req_id": 1 00:20:24.556 } 00:20:24.556 Got JSON-RPC error response 00:20:24.556 response: 00:20:24.556 { 00:20:24.556 "code": -5, 00:20:24.556 "message": "Input/output error" 00:20:24.556 } 00:20:24.556 06:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:24.556 06:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:20:24.556 06:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:24.556 06:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:24.556 06:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:24.556 06:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:20:24.556 06:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:20:24.556 06:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:20:24.556 06:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:24.556 06:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:20:24.556 06:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:24.556 06:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:20:24.556 06:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:24.556 06:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:24.556 rmmod nvme_tcp 00:20:24.556 rmmod nvme_fabrics 00:20:24.556 06:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:24.556 06:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:20:24.556 06:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:20:24.556 06:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 92524 ']' 00:20:24.556 06:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 92524 00:20:24.556 06:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 92524 ']' 00:20:24.556 06:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 92524 00:20:24.556 06:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:20:24.556 06:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:24.556 06:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 92524 00:20:24.556 killing process with pid 92524 00:20:24.556 06:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:24.556 06:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:24.556 06:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 92524' 00:20:24.556 06:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 92524 00:20:24.556 06:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 92524 00:20:24.815 06:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:24.815 
06:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:24.815 06:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:24.815 06:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:24.815 06:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:24.815 06:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:24.815 06:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:24.815 06:07:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:24.815 06:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:24.815 06:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:20:24.815 06:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:24.815 06:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:20:24.815 06:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:20:24.815 06:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:20:24.815 06:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:24.815 06:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:24.815 06:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:24.815 06:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:24.815 06:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:20:24.815 06:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:20:24.815 06:07:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:25.383 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:25.641 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:25.641 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:25.641 06:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.yQE /tmp/spdk.key-null.ha4 /tmp/spdk.key-sha256.ZiR /tmp/spdk.key-sha384.Lzr /tmp/spdk.key-sha512.ZkR /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:20:25.641 06:07:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:26.208 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:26.208 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:26.208 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:26.208 ************************************ 00:20:26.208 END TEST nvmf_auth_host 00:20:26.208 ************************************ 00:20:26.208 00:20:26.208 real 0m38.006s 00:20:26.208 user 0m33.482s 00:20:26.208 sys 0m3.854s 00:20:26.208 06:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:20:26.208 06:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.208 06:07:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:26.208 06:07:17 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:20:26.208 06:07:17 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:20:26.208 06:07:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:26.208 06:07:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:26.208 06:07:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:26.208 ************************************ 00:20:26.208 START TEST nvmf_digest 00:20:26.208 ************************************ 00:20:26.208 06:07:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:20:26.208 * Looking for test storage... 00:20:26.208 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:26.208 06:07:17 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:26.208 06:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:20:26.208 06:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:26.208 06:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:26.208 06:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:26.208 06:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:26.208 06:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:26.208 06:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:26.208 06:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:26.208 06:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:26.208 06:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:26.208 06:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:26.208 06:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:20:26.208 06:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=d95af516-4532-4483-a837-b3cd72acabce 00:20:26.208 06:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:26.208 06:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:26.208 06:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:26.208 06:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:26.208 06:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:26.208 06:07:17 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:26.208 06:07:17 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:26.208 06:07:17 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:26.208 06:07:17 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.208 06:07:17 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.208 06:07:17 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.208 06:07:17 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:20:26.209 06:07:17 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.209 06:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:20:26.209 06:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:26.209 06:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:26.209 06:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:26.209 06:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:26.209 06:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:26.209 06:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:26.209 06:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:26.209 06:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:26.209 06:07:17 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:20:26.209 06:07:17 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:20:26.209 06:07:17 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:20:26.209 06:07:17 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != 
\t\c\p ]] 00:20:26.209 06:07:17 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:20:26.209 06:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:26.209 06:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:26.209 06:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:26.209 06:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:26.209 06:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:26.209 06:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:26.209 06:07:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:26.467 06:07:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:26.467 06:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:26.467 06:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:26.467 06:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:26.467 06:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:26.467 06:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:26.467 06:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:26.467 06:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:26.467 06:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:26.467 06:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:26.467 06:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:26.467 06:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:26.467 06:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:26.467 06:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:26.467 06:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:26.467 06:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:26.467 06:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:26.467 06:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:26.467 06:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:26.467 06:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:26.467 06:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:26.467 Cannot find device "nvmf_tgt_br" 00:20:26.467 06:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # true 00:20:26.467 06:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:26.467 Cannot find device "nvmf_tgt_br2" 00:20:26.467 06:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # true 00:20:26.467 06:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:26.467 06:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:26.467 Cannot find device "nvmf_tgt_br" 00:20:26.467 06:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # true 00:20:26.467 06:07:17 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:26.467 Cannot find device "nvmf_tgt_br2" 00:20:26.467 06:07:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # true 00:20:26.467 06:07:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:26.467 06:07:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:26.467 06:07:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:26.467 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:26.467 06:07:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # true 00:20:26.467 06:07:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:26.467 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:26.467 06:07:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # true 00:20:26.467 06:07:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:26.467 06:07:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:26.467 06:07:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:26.467 06:07:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:26.467 06:07:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:26.467 06:07:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:26.467 06:07:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:26.467 06:07:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:26.467 06:07:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:26.467 06:07:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:26.467 06:07:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:26.467 06:07:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:26.467 06:07:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:26.725 06:07:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:26.725 06:07:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:26.725 06:07:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:26.725 06:07:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:26.725 06:07:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:26.725 06:07:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:26.725 06:07:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:26.725 06:07:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:26.725 06:07:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:26.725 06:07:18 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:26.725 06:07:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:26.725 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:26.725 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:20:26.725 00:20:26.725 --- 10.0.0.2 ping statistics --- 00:20:26.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.725 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:20:26.725 06:07:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:26.725 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:26.725 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:20:26.725 00:20:26.725 --- 10.0.0.3 ping statistics --- 00:20:26.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.725 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:20:26.725 06:07:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:26.725 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:26.725 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:20:26.725 00:20:26.725 --- 10.0.0.1 ping statistics --- 00:20:26.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.725 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:20:26.725 06:07:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:26.725 06:07:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:20:26.725 06:07:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:26.725 06:07:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:26.725 06:07:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:26.725 06:07:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:26.725 06:07:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:26.725 06:07:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:26.725 06:07:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:26.725 06:07:18 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:26.725 06:07:18 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:20:26.725 06:07:18 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:20:26.725 06:07:18 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:26.725 06:07:18 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:26.725 06:07:18 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:20:26.725 ************************************ 00:20:26.725 START TEST nvmf_digest_clean 00:20:26.725 ************************************ 00:20:26.725 06:07:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:20:26.725 06:07:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:20:26.725 06:07:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:20:26.725 06:07:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:20:26.725 06:07:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:20:26.725 06:07:18 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:20:26.725 06:07:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:26.725 06:07:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:26.725 06:07:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:26.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:26.725 06:07:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=94116 00:20:26.725 06:07:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 94116 00:20:26.725 06:07:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 94116 ']' 00:20:26.725 06:07:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:26.725 06:07:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:26.725 06:07:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:26.725 06:07:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:26.725 06:07:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:26.725 06:07:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:26.725 [2024-07-13 06:07:18.397315] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:20:26.725 [2024-07-13 06:07:18.397438] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:26.983 [2024-07-13 06:07:18.536810] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:26.983 [2024-07-13 06:07:18.583710] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:26.983 [2024-07-13 06:07:18.583784] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:26.983 [2024-07-13 06:07:18.583798] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:26.983 [2024-07-13 06:07:18.583816] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:26.983 [2024-07-13 06:07:18.583825] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
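The nvmf_veth_init sequence traced above (nvmf/common.sh@141-207) builds the topology every nvmf TCP test in this log runs against: one veth pair for the initiator on the host, two veth pairs whose target ends are moved into the nvmf_tgt_ns_spdk namespace, a bridge joining the host-side ends, and an iptables rule admitting port 4420, all verified with three pings. The condensed sketch below only restates the commands already visible in the trace (interface names and 10.0.0.x addresses taken from the log) and is not a substitute for the common.sh helper.

    # Sketch of the topology set up by nvmf_veth_init (values copied from the trace above).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side stays on the host
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target side moves into the namespace
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target address
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge                               # host-side bridge ties the pairs together
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3 && ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1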
00:20:26.983 [2024-07-13 06:07:18.583855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:26.983 06:07:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:26.983 06:07:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:20:26.983 06:07:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:26.983 06:07:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:26.983 06:07:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:27.255 06:07:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:27.255 06:07:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:20:27.256 06:07:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:20:27.256 06:07:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:20:27.256 06:07:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.256 06:07:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:27.256 [2024-07-13 06:07:18.762839] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:27.256 null0 00:20:27.256 [2024-07-13 06:07:18.798349] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:27.256 [2024-07-13 06:07:18.822461] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:27.256 06:07:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.256 06:07:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:20:27.256 06:07:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:27.256 06:07:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:27.256 06:07:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:20:27.256 06:07:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:20:27.256 06:07:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:20:27.256 06:07:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:27.256 06:07:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=94139 00:20:27.256 06:07:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:20:27.256 06:07:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 94139 /var/tmp/bperf.sock 00:20:27.256 06:07:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 94139 ']' 00:20:27.256 06:07:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:27.256 06:07:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:27.256 06:07:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:27.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:27.256 06:07:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:27.256 06:07:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:27.256 [2024-07-13 06:07:18.885865] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:20:27.256 [2024-07-13 06:07:18.886182] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94139 ] 00:20:27.525 [2024-07-13 06:07:19.027738] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.525 [2024-07-13 06:07:19.073517] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:27.525 06:07:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:27.525 06:07:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:20:27.525 06:07:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:27.525 06:07:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:27.525 06:07:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:27.784 [2024-07-13 06:07:19.429906] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:27.784 06:07:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:27.784 06:07:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:28.350 nvme0n1 00:20:28.350 06:07:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:28.350 06:07:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:28.350 Running I/O for 2 seconds... 
00:20:30.252 00:20:30.252 Latency(us) 00:20:30.252 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:30.252 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:20:30.252 nvme0n1 : 2.01 13076.91 51.08 0.00 0.00 9781.00 8757.99 20614.05 00:20:30.252 =================================================================================================================== 00:20:30.252 Total : 13076.91 51.08 0.00 0.00 9781.00 8757.99 20614.05 00:20:30.252 0 00:20:30.252 06:07:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:30.252 06:07:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:30.252 06:07:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:30.252 06:07:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:30.252 | select(.opcode=="crc32c") 00:20:30.252 | "\(.module_name) \(.executed)"' 00:20:30.252 06:07:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:30.510 06:07:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:30.510 06:07:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:30.510 06:07:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:30.510 06:07:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:30.511 06:07:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 94139 00:20:30.511 06:07:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 94139 ']' 00:20:30.511 06:07:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 94139 00:20:30.511 06:07:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:20:30.511 06:07:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:30.511 06:07:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94139 00:20:30.769 killing process with pid 94139 00:20:30.769 Received shutdown signal, test time was about 2.000000 seconds 00:20:30.769 00:20:30.769 Latency(us) 00:20:30.769 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:30.769 =================================================================================================================== 00:20:30.769 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:30.769 06:07:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:30.769 06:07:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:30.769 06:07:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94139' 00:20:30.769 06:07:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 94139 00:20:30.769 06:07:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 94139 00:20:30.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
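Each run_bperf pass in nvmf_digest_clean repeats the pattern just traced: start bdevperf against /var/tmp/bperf.sock with --wait-for-rpc, finish framework init over that socket, attach an NVMe-oF controller with TCP data digest enabled (--ddgst), drive I/O for two seconds, then read the accel crc32c statistics to check that digests were actually executed and by the expected module (software here; dsa only when the dsa_initiator/dsa_target variants are selected). The sketch below simply strings together the RPCs visible in this trace, with the repository path as it appears in the log; treat it as an outline of the check, not the test script itself.

    SPDK=/home/vagrant/spdk_repo/spdk                         # path as used throughout this log
    # Initiator-side I/O generator; -z makes it idle until perform_tests is issued.
    $SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    # (the test waits for /var/tmp/bperf.sock to appear before issuing RPCs)
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
    # Confirm crc32c digests were computed, and by which accel module:
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'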
00:20:30.769 06:07:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:20:30.769 06:07:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:30.769 06:07:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:30.769 06:07:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:20:30.769 06:07:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:20:30.769 06:07:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:20:30.769 06:07:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:30.769 06:07:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=94192 00:20:30.769 06:07:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 94192 /var/tmp/bperf.sock 00:20:30.769 06:07:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 94192 ']' 00:20:30.769 06:07:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:30.769 06:07:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:30.769 06:07:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:30.769 06:07:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:30.770 06:07:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:20:30.770 06:07:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:30.770 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:30.770 Zero copy mechanism will not be used. 00:20:30.770 [2024-07-13 06:07:22.446892] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:20:30.770 [2024-07-13 06:07:22.446999] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94192 ] 00:20:31.028 [2024-07-13 06:07:22.585017] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:31.028 [2024-07-13 06:07:22.631161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:31.028 06:07:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:31.028 06:07:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:20:31.028 06:07:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:31.028 06:07:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:31.028 06:07:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:31.286 [2024-07-13 06:07:22.995208] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:31.545 06:07:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:31.545 06:07:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:31.803 nvme0n1 00:20:31.803 06:07:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:31.803 06:07:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:31.803 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:31.803 Zero copy mechanism will not be used. 00:20:31.803 Running I/O for 2 seconds... 
00:20:34.333 00:20:34.333 Latency(us) 00:20:34.333 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:34.333 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:20:34.333 nvme0n1 : 2.00 6738.00 842.25 0.00 0.00 2370.56 2055.45 9889.98 00:20:34.333 =================================================================================================================== 00:20:34.333 Total : 6738.00 842.25 0.00 0.00 2370.56 2055.45 9889.98 00:20:34.333 0 00:20:34.333 06:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:34.333 06:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:34.333 06:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:34.333 06:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:34.333 06:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:34.333 | select(.opcode=="crc32c") 00:20:34.333 | "\(.module_name) \(.executed)"' 00:20:34.333 06:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:34.333 06:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:34.333 06:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:34.333 06:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:34.333 06:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 94192 00:20:34.333 06:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 94192 ']' 00:20:34.333 06:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 94192 00:20:34.333 06:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:20:34.333 06:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:34.333 06:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94192 00:20:34.333 killing process with pid 94192 00:20:34.333 Received shutdown signal, test time was about 2.000000 seconds 00:20:34.333 00:20:34.333 Latency(us) 00:20:34.333 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:34.333 =================================================================================================================== 00:20:34.333 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:34.333 06:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:34.333 06:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:34.333 06:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94192' 00:20:34.333 06:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 94192 00:20:34.333 06:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 94192 00:20:34.333 06:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:20:34.333 06:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:34.333 06:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:34.333 06:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:20:34.333 06:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:20:34.333 06:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:20:34.333 06:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:34.333 06:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=94239 00:20:34.333 06:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:20:34.333 06:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 94239 /var/tmp/bperf.sock 00:20:34.333 06:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 94239 ']' 00:20:34.333 06:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:34.333 06:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:34.333 06:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:34.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:34.333 06:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:34.333 06:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:34.333 [2024-07-13 06:07:26.014346] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:20:34.333 [2024-07-13 06:07:26.014584] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94239 ] 00:20:34.591 [2024-07-13 06:07:26.149274] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.591 [2024-07-13 06:07:26.190689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:34.591 06:07:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:34.591 06:07:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:20:34.591 06:07:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:34.591 06:07:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:34.591 06:07:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:34.850 [2024-07-13 06:07:26.518551] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:34.850 06:07:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:34.850 06:07:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:35.416 nvme0n1 00:20:35.416 06:07:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:35.416 06:07:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:35.416 Running I/O for 2 seconds... 
00:20:37.328 00:20:37.328 Latency(us) 00:20:37.328 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:37.328 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:37.328 nvme0n1 : 2.01 16282.94 63.61 0.00 0.00 7842.13 2546.97 17754.30 00:20:37.328 =================================================================================================================== 00:20:37.328 Total : 16282.94 63.61 0.00 0.00 7842.13 2546.97 17754.30 00:20:37.328 0 00:20:37.586 06:07:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:37.586 06:07:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:37.586 06:07:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:37.586 06:07:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:37.586 | select(.opcode=="crc32c") 00:20:37.586 | "\(.module_name) \(.executed)"' 00:20:37.586 06:07:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:37.845 06:07:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:37.845 06:07:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:37.845 06:07:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:37.845 06:07:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:37.845 06:07:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 94239 00:20:37.845 06:07:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 94239 ']' 00:20:37.845 06:07:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 94239 00:20:37.845 06:07:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:20:37.845 06:07:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:37.845 06:07:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94239 00:20:37.845 killing process with pid 94239 00:20:37.845 Received shutdown signal, test time was about 2.000000 seconds 00:20:37.845 00:20:37.845 Latency(us) 00:20:37.845 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:37.845 =================================================================================================================== 00:20:37.845 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:37.845 06:07:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:37.845 06:07:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:37.845 06:07:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94239' 00:20:37.845 06:07:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 94239 00:20:37.845 06:07:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 94239 00:20:37.845 06:07:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:20:37.845 06:07:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:37.845 06:07:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:37.845 06:07:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:20:37.845 06:07:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:20:37.845 06:07:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:20:37.845 06:07:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:37.845 06:07:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=94288 00:20:37.845 06:07:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:20:37.845 06:07:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 94288 /var/tmp/bperf.sock 00:20:37.845 06:07:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 94288 ']' 00:20:37.845 06:07:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:37.845 06:07:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:37.845 06:07:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:37.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:37.845 06:07:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:37.845 06:07:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:38.103 [2024-07-13 06:07:29.591615] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:20:38.103 [2024-07-13 06:07:29.591925] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94288 ] 00:20:38.103 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:38.103 Zero copy mechanism will not be used. 
00:20:38.103 [2024-07-13 06:07:29.734571] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.103 [2024-07-13 06:07:29.779344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:38.103 06:07:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:38.103 06:07:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:20:38.103 06:07:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:38.103 06:07:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:38.103 06:07:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:38.669 [2024-07-13 06:07:30.122077] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:38.669 06:07:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:38.670 06:07:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:38.927 nvme0n1 00:20:38.927 06:07:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:38.927 06:07:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:38.927 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:38.927 Zero copy mechanism will not be used. 00:20:38.927 Running I/O for 2 seconds... 
00:20:41.456 00:20:41.456 Latency(us) 00:20:41.456 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:41.456 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:20:41.456 nvme0n1 : 2.00 5434.54 679.32 0.00 0.00 2936.75 1541.59 4468.36 00:20:41.456 =================================================================================================================== 00:20:41.456 Total : 5434.54 679.32 0.00 0.00 2936.75 1541.59 4468.36 00:20:41.456 0 00:20:41.456 06:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:41.456 06:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:41.456 06:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:41.456 06:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:41.456 | select(.opcode=="crc32c") 00:20:41.456 | "\(.module_name) \(.executed)"' 00:20:41.456 06:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:41.456 06:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:41.456 06:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:41.456 06:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:41.456 06:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:41.456 06:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 94288 00:20:41.456 06:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 94288 ']' 00:20:41.456 06:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 94288 00:20:41.456 06:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:20:41.456 06:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:41.456 06:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94288 00:20:41.456 killing process with pid 94288 00:20:41.456 Received shutdown signal, test time was about 2.000000 seconds 00:20:41.456 00:20:41.456 Latency(us) 00:20:41.456 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:41.456 =================================================================================================================== 00:20:41.456 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:41.456 06:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:41.456 06:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:41.456 06:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94288' 00:20:41.456 06:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 94288 00:20:41.456 06:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 94288 00:20:41.456 06:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 94116 00:20:41.456 06:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@948 -- # '[' -z 94116 ']' 00:20:41.456 06:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 94116 00:20:41.456 06:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:20:41.456 06:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:41.456 06:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94116 00:20:41.456 killing process with pid 94116 00:20:41.456 06:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:41.456 06:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:41.456 06:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94116' 00:20:41.456 06:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 94116 00:20:41.456 06:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 94116 00:20:41.714 ************************************ 00:20:41.714 END TEST nvmf_digest_clean 00:20:41.714 ************************************ 00:20:41.714 00:20:41.714 real 0m14.916s 00:20:41.714 user 0m29.039s 00:20:41.714 sys 0m4.237s 00:20:41.714 06:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:41.714 06:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:41.714 06:07:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:20:41.714 06:07:33 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:20:41.714 06:07:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:41.714 06:07:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:41.714 06:07:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:20:41.714 ************************************ 00:20:41.714 START TEST nvmf_digest_error 00:20:41.714 ************************************ 00:20:41.714 06:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:20:41.714 06:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:20:41.714 06:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:41.714 06:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:41.714 06:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:41.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:41.714 06:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=94367 00:20:41.714 06:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 94367 00:20:41.714 06:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 94367 ']' 00:20:41.714 06:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:41.714 06:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:41.714 06:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:41.714 06:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:41.714 06:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:41.714 06:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:41.714 [2024-07-13 06:07:33.382398] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:20:41.714 [2024-07-13 06:07:33.382537] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:41.973 [2024-07-13 06:07:33.529179] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:41.973 [2024-07-13 06:07:33.571728] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:41.973 [2024-07-13 06:07:33.571778] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:41.973 [2024-07-13 06:07:33.571804] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:41.973 [2024-07-13 06:07:33.571812] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:41.973 [2024-07-13 06:07:33.571819] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:41.973 [2024-07-13 06:07:33.571886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:41.973 06:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:41.973 06:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:20:41.973 06:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:41.973 06:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:41.973 06:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:41.973 06:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:41.973 06:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:20:41.973 06:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.973 06:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:41.973 [2024-07-13 06:07:33.664331] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:20:41.973 06:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.973 06:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:20:41.973 06:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:20:41.973 06:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.973 06:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:42.232 [2024-07-13 06:07:33.708448] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:42.232 null0 00:20:42.232 [2024-07-13 06:07:33.742971] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:42.232 [2024-07-13 06:07:33.767092] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:42.232 06:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.232 06:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:20:42.232 06:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:20:42.232 06:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:20:42.232 06:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:20:42.232 06:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:20:42.232 06:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:20:42.232 06:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=94388 00:20:42.232 06:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 94388 /var/tmp/bperf.sock 00:20:42.232 06:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 94388 ']' 00:20:42.232 06:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 
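The nvmf_digest_error test starting here reuses the same transport and listener, but the target is brought up with crc32c assigned to the accel "error" module (accel_assign_opc above), so digest corruption can be injected on demand. On the bperf side the controller is attached with --ddgst and an unlimited --bdev-retry-count, corruption is injected for a batch of operations, and the injected failures surface below as "data digest error" / "COMMAND TRANSIENT TRANSPORT ERROR" completions that the retry path has to absorb. A condensed sketch, restricted to RPCs that appear in this trace (target bring-up details such as the common_target_config transport/subsystem setup are omitted):

    SPDK=/home/vagrant/spdk_repo/spdk
    # Target inside the test namespace, held at --wait-for-rpc so accel can be reconfigured first.
    ip netns exec nvmf_tgt_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    $SPDK/scripts/rpc.py accel_assign_opc -o crc32c -m error       # route digest crc32c through the error module
    # ... common_target_config (transport + listener on 10.0.0.2:4420) as traced above ...
    # Initiator-side bdevperf; retries are unlimited so injected digest errors do not fail the run outright.
    $SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z &
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t disable        # start with injection off
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256 # corrupt a batch of 256 digests
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests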
00:20:42.232 06:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:42.232 06:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:42.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:42.232 06:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:42.232 06:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:42.232 [2024-07-13 06:07:33.821736] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:20:42.232 [2024-07-13 06:07:33.821973] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94388 ] 00:20:42.490 [2024-07-13 06:07:33.961521] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.490 [2024-07-13 06:07:34.003892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:42.490 [2024-07-13 06:07:34.039194] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:42.490 06:07:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:42.490 06:07:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:20:42.490 06:07:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:42.490 06:07:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:42.749 06:07:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:42.749 06:07:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.749 06:07:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:42.749 06:07:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.749 06:07:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:42.749 06:07:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:43.008 nvme0n1 00:20:43.008 06:07:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:20:43.008 06:07:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.008 06:07:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:43.008 06:07:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.008 06:07:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 
00:20:43.008 06:07:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:43.277 Running I/O for 2 seconds... 00:20:43.277 [2024-07-13 06:07:34.831538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:43.277 [2024-07-13 06:07:34.831638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.277 [2024-07-13 06:07:34.831676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.277 [2024-07-13 06:07:34.850773] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:43.277 [2024-07-13 06:07:34.850862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.277 [2024-07-13 06:07:34.850903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.277 [2024-07-13 06:07:34.871063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:43.277 [2024-07-13 06:07:34.871103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.277 [2024-07-13 06:07:34.871133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.277 [2024-07-13 06:07:34.889926] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:43.277 [2024-07-13 06:07:34.889985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.277 [2024-07-13 06:07:34.890016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.277 [2024-07-13 06:07:34.909948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:43.277 [2024-07-13 06:07:34.909991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.277 [2024-07-13 06:07:34.910019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.277 [2024-07-13 06:07:34.929220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:43.277 [2024-07-13 06:07:34.929263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.277 [2024-07-13 06:07:34.929278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.277 [2024-07-13 06:07:34.948768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:43.277 [2024-07-13 06:07:34.948810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:13 nsid:1 lba:8392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.277 [2024-07-13 06:07:34.948825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.277 [2024-07-13 06:07:34.968024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:43.277 [2024-07-13 06:07:34.968065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.277 [2024-07-13 06:07:34.968111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.277 [2024-07-13 06:07:34.987637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:43.277 [2024-07-13 06:07:34.987676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:12427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.277 [2024-07-13 06:07:34.987690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.553 [2024-07-13 06:07:35.006196] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:43.553 [2024-07-13 06:07:35.006239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:21273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.553 [2024-07-13 06:07:35.006255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.553 [2024-07-13 06:07:35.025040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:43.553 [2024-07-13 06:07:35.025097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.553 [2024-07-13 06:07:35.025128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.553 [2024-07-13 06:07:35.045171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:43.554 [2024-07-13 06:07:35.045216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:5847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.554 [2024-07-13 06:07:35.045231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.554 [2024-07-13 06:07:35.063660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:43.554 [2024-07-13 06:07:35.063703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:22910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.554 [2024-07-13 06:07:35.063718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.554 [2024-07-13 06:07:35.083817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:43.554 [2024-07-13 06:07:35.083857] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.554 [2024-07-13 06:07:35.083886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.554 [2024-07-13 06:07:35.104481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:43.554 [2024-07-13 06:07:35.104549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:4691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.554 [2024-07-13 06:07:35.104581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.554 [2024-07-13 06:07:35.124749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:43.554 [2024-07-13 06:07:35.124821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:14553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.554 [2024-07-13 06:07:35.124851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.554 [2024-07-13 06:07:35.145624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:43.554 [2024-07-13 06:07:35.145666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:21221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.554 [2024-07-13 06:07:35.145697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.554 [2024-07-13 06:07:35.166137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:43.554 [2024-07-13 06:07:35.166178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:10295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.554 [2024-07-13 06:07:35.166218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.554 [2024-07-13 06:07:35.186267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:43.554 [2024-07-13 06:07:35.186310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:9415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.554 [2024-07-13 06:07:35.186325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.554 [2024-07-13 06:07:35.206486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:43.554 [2024-07-13 06:07:35.206528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:8175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.554 [2024-07-13 06:07:35.206544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.554 [2024-07-13 06:07:35.226137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x10d0b40) 00:20:43.554 [2024-07-13 06:07:35.226205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:8047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.554 [2024-07-13 06:07:35.226225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.554 [2024-07-13 06:07:35.245523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:43.554 [2024-07-13 06:07:35.245564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:9091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.554 [2024-07-13 06:07:35.245578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.554 [2024-07-13 06:07:35.264900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:43.554 [2024-07-13 06:07:35.264941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:6309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.554 [2024-07-13 06:07:35.264972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.813 [2024-07-13 06:07:35.285085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:43.813 [2024-07-13 06:07:35.285160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:17948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.813 [2024-07-13 06:07:35.285175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.813 [2024-07-13 06:07:35.304006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:43.813 [2024-07-13 06:07:35.304046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:10229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.813 [2024-07-13 06:07:35.304077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.813 [2024-07-13 06:07:35.323693] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:43.813 [2024-07-13 06:07:35.323763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:9605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.813 [2024-07-13 06:07:35.323794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.813 [2024-07-13 06:07:35.344170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:43.813 [2024-07-13 06:07:35.344208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:16670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.813 [2024-07-13 06:07:35.344238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.813 [2024-07-13 06:07:35.364896] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:43.813 [2024-07-13 06:07:35.364936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:7183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.813 [2024-07-13 06:07:35.364965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.813 [2024-07-13 06:07:35.384328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:43.813 [2024-07-13 06:07:35.384426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:13699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.813 [2024-07-13 06:07:35.384442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.813 [2024-07-13 06:07:35.403719] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:43.813 [2024-07-13 06:07:35.403774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.813 [2024-07-13 06:07:35.403788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.813 [2024-07-13 06:07:35.422655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:43.813 [2024-07-13 06:07:35.422710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:1176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.813 [2024-07-13 06:07:35.422756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.813 [2024-07-13 06:07:35.442220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:43.813 [2024-07-13 06:07:35.442277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:12069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.813 [2024-07-13 06:07:35.442293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.813 [2024-07-13 06:07:35.461646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:43.813 [2024-07-13 06:07:35.461686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:14165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.813 [2024-07-13 06:07:35.461700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.813 [2024-07-13 06:07:35.480751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:43.813 [2024-07-13 06:07:35.480791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:1662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.813 [2024-07-13 06:07:35.480822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:20:43.813 [2024-07-13 06:07:35.500176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:43.813 [2024-07-13 06:07:35.500217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:2726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.813 [2024-07-13 06:07:35.500248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.813 [2024-07-13 06:07:35.519758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:43.813 [2024-07-13 06:07:35.519800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:3952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.813 [2024-07-13 06:07:35.519814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.813 [2024-07-13 06:07:35.539448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:43.813 [2024-07-13 06:07:35.539533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:21719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.813 [2024-07-13 06:07:35.539551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.072 [2024-07-13 06:07:35.559268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:44.072 [2024-07-13 06:07:35.559309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:19732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.072 [2024-07-13 06:07:35.559339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.072 [2024-07-13 06:07:35.579027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:44.072 [2024-07-13 06:07:35.579068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:23050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.072 [2024-07-13 06:07:35.579097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.072 [2024-07-13 06:07:35.598231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:44.072 [2024-07-13 06:07:35.598274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:20911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.072 [2024-07-13 06:07:35.598289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.072 [2024-07-13 06:07:35.618065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:44.072 [2024-07-13 06:07:35.618103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:18341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.072 [2024-07-13 06:07:35.618122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.072 [2024-07-13 06:07:35.637852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:44.072 [2024-07-13 06:07:35.637895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:2462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.072 [2024-07-13 06:07:35.637910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.072 [2024-07-13 06:07:35.657852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:44.072 [2024-07-13 06:07:35.657891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:3770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.072 [2024-07-13 06:07:35.657922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.072 [2024-07-13 06:07:35.677051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:44.072 [2024-07-13 06:07:35.677107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:2383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.072 [2024-07-13 06:07:35.677153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.072 [2024-07-13 06:07:35.697145] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:44.072 [2024-07-13 06:07:35.697184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:15752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.072 [2024-07-13 06:07:35.697214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.072 [2024-07-13 06:07:35.717315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:44.072 [2024-07-13 06:07:35.717356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:14886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.072 [2024-07-13 06:07:35.717385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.072 [2024-07-13 06:07:35.737094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:44.072 [2024-07-13 06:07:35.737135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:14297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.072 [2024-07-13 06:07:35.737165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.072 [2024-07-13 06:07:35.757160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:44.072 [2024-07-13 06:07:35.757201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:4723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.072 [2024-07-13 06:07:35.757231] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.072 [2024-07-13 06:07:35.777206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:44.072 [2024-07-13 06:07:35.777248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:7805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.072 [2024-07-13 06:07:35.777279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.072 [2024-07-13 06:07:35.795532] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:44.072 [2024-07-13 06:07:35.795575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:23650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.072 [2024-07-13 06:07:35.795590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.331 [2024-07-13 06:07:35.815248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:44.331 [2024-07-13 06:07:35.815289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:22968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.331 [2024-07-13 06:07:35.815320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.331 [2024-07-13 06:07:35.835575] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:44.331 [2024-07-13 06:07:35.835614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:3049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.331 [2024-07-13 06:07:35.835643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.331 [2024-07-13 06:07:35.856078] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:44.331 [2024-07-13 06:07:35.856119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:5807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.331 [2024-07-13 06:07:35.856133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.331 [2024-07-13 06:07:35.875925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:44.331 [2024-07-13 06:07:35.875967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:23482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.331 [2024-07-13 06:07:35.875981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.331 [2024-07-13 06:07:35.895952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:44.331 [2024-07-13 06:07:35.896007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:5084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:44.331 [2024-07-13 06:07:35.896054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.331 [2024-07-13 06:07:35.915727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:44.331 [2024-07-13 06:07:35.915782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:20784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.331 [2024-07-13 06:07:35.915813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.331 [2024-07-13 06:07:35.935210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:44.331 [2024-07-13 06:07:35.935250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:18555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.331 [2024-07-13 06:07:35.935281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.331 [2024-07-13 06:07:35.955023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:44.331 [2024-07-13 06:07:35.955093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:2885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.331 [2024-07-13 06:07:35.955124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.331 [2024-07-13 06:07:35.975243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:44.331 [2024-07-13 06:07:35.975284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:20217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.331 [2024-07-13 06:07:35.975314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.331 [2024-07-13 06:07:35.995659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:44.331 [2024-07-13 06:07:35.995731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:10361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.331 [2024-07-13 06:07:35.995746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.331 [2024-07-13 06:07:36.015822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:44.331 [2024-07-13 06:07:36.015863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:22123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.331 [2024-07-13 06:07:36.015893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.331 [2024-07-13 06:07:36.035518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:44.331 [2024-07-13 06:07:36.035564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:123 nsid:1 lba:16331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.331 [2024-07-13 06:07:36.035595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.331 [2024-07-13 06:07:36.055255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:44.331 [2024-07-13 06:07:36.055295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:10995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.331 [2024-07-13 06:07:36.055326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.590 [2024-07-13 06:07:36.083897] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:44.590 [2024-07-13 06:07:36.083939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:15521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.590 [2024-07-13 06:07:36.083953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.590 [2024-07-13 06:07:36.103212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:44.590 [2024-07-13 06:07:36.103253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:21818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.590 [2024-07-13 06:07:36.103283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.590 [2024-07-13 06:07:36.122674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:44.590 [2024-07-13 06:07:36.122715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:10691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.590 [2024-07-13 06:07:36.122729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.590 [2024-07-13 06:07:36.142250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:44.590 [2024-07-13 06:07:36.142308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:13532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.590 [2024-07-13 06:07:36.142324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.590 [2024-07-13 06:07:36.161994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:44.590 [2024-07-13 06:07:36.162036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:13114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.590 [2024-07-13 06:07:36.162050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.590 [2024-07-13 06:07:36.181586] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:44.590 [2024-07-13 06:07:36.181625] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:9686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.590 [2024-07-13 06:07:36.181640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.590 [2024-07-13 06:07:36.201136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:44.590 [2024-07-13 06:07:36.201193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:20308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.590 [2024-07-13 06:07:36.201223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.590 [2024-07-13 06:07:36.220350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:44.590 [2024-07-13 06:07:36.220419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:4345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.590 [2024-07-13 06:07:36.220436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.590 [2024-07-13 06:07:36.239980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:44.590 [2024-07-13 06:07:36.240056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:12931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.590 [2024-07-13 06:07:36.240087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.590 [2024-07-13 06:07:36.259752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:44.590 [2024-07-13 06:07:36.259791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:4952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.590 [2024-07-13 06:07:36.259805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.590 [2024-07-13 06:07:36.279549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:44.590 [2024-07-13 06:07:36.279624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:13911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.590 [2024-07-13 06:07:36.279639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.590 [2024-07-13 06:07:36.298915] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:44.590 [2024-07-13 06:07:36.298973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:5308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.590 [2024-07-13 06:07:36.298988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.849 [2024-07-13 06:07:36.318666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x10d0b40) 00:20:44.849 [2024-07-13 06:07:36.318722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:20331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.849 [2024-07-13 06:07:36.318753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.849 [2024-07-13 06:07:36.338411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:44.849 [2024-07-13 06:07:36.338448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:19725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.849 [2024-07-13 06:07:36.338462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.849 [2024-07-13 06:07:36.358550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:44.849 [2024-07-13 06:07:36.358740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:25171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.849 [2024-07-13 06:07:36.358878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.849 [2024-07-13 06:07:36.378922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:44.849 [2024-07-13 06:07:36.379136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:1225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.849 [2024-07-13 06:07:36.379289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.849 [2024-07-13 06:07:36.399990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:44.849 [2024-07-13 06:07:36.400254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:8081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.849 [2024-07-13 06:07:36.400273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.849 [2024-07-13 06:07:36.420911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:44.849 [2024-07-13 06:07:36.421000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:7279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.849 [2024-07-13 06:07:36.421016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.849 [2024-07-13 06:07:36.441169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:44.849 [2024-07-13 06:07:36.441213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:24191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.849 [2024-07-13 06:07:36.441243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.849 [2024-07-13 06:07:36.460860] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:44.849 [2024-07-13 06:07:36.460900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:4852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.849 [2024-07-13 06:07:36.460915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.849 [2024-07-13 06:07:36.479948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:44.849 [2024-07-13 06:07:36.479989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:17221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.849 [2024-07-13 06:07:36.480005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.849 [2024-07-13 06:07:36.498969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:44.849 [2024-07-13 06:07:36.499028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:17196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.849 [2024-07-13 06:07:36.499043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.849 [2024-07-13 06:07:36.518349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:44.849 [2024-07-13 06:07:36.518406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:20419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.849 [2024-07-13 06:07:36.518422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.849 [2024-07-13 06:07:36.538113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:44.849 [2024-07-13 06:07:36.538153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:10304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.849 [2024-07-13 06:07:36.538167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.849 [2024-07-13 06:07:36.558244] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:44.849 [2024-07-13 06:07:36.558289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:14215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.849 [2024-07-13 06:07:36.558305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.108 [2024-07-13 06:07:36.578360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:45.108 [2024-07-13 06:07:36.578417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:7452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.108 [2024-07-13 06:07:36.578433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:20:45.108 [2024-07-13 06:07:36.598130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:45.108 [2024-07-13 06:07:36.598171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:25271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.108 [2024-07-13 06:07:36.598228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.109 [2024-07-13 06:07:36.618156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:45.109 [2024-07-13 06:07:36.618222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:16097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.109 [2024-07-13 06:07:36.618239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.109 [2024-07-13 06:07:36.637824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:45.109 [2024-07-13 06:07:36.637864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:24582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.109 [2024-07-13 06:07:36.637878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.109 [2024-07-13 06:07:36.657257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:45.109 [2024-07-13 06:07:36.657299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:3662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.109 [2024-07-13 06:07:36.657330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.109 [2024-07-13 06:07:36.676505] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:45.109 [2024-07-13 06:07:36.676554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:14753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.109 [2024-07-13 06:07:36.676571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.109 [2024-07-13 06:07:36.696691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:45.109 [2024-07-13 06:07:36.696746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:18489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.109 [2024-07-13 06:07:36.696777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.109 [2024-07-13 06:07:36.716735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:45.109 [2024-07-13 06:07:36.716790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:21793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.109 [2024-07-13 06:07:36.716835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.109 [2024-07-13 06:07:36.736484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:45.109 [2024-07-13 06:07:36.736552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:21755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.109 [2024-07-13 06:07:36.736583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.109 [2024-07-13 06:07:36.756625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:45.109 [2024-07-13 06:07:36.756667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:12921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.109 [2024-07-13 06:07:36.756718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.109 [2024-07-13 06:07:36.775911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:45.109 [2024-07-13 06:07:36.775950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:2781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.109 [2024-07-13 06:07:36.775979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.109 [2024-07-13 06:07:36.795557] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d0b40) 00:20:45.109 [2024-07-13 06:07:36.795645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:19944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.109 [2024-07-13 06:07:36.795692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.109 00:20:45.109 Latency(us) 00:20:45.109 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:45.109 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:20:45.109 nvme0n1 : 2.01 12785.54 49.94 0.00 0.00 10003.01 8698.41 37891.72 00:20:45.109 =================================================================================================================== 00:20:45.109 Total : 12785.54 49.94 0.00 0.00 10003.01 8698.41 37891.72 00:20:45.109 0 00:20:45.109 06:07:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:20:45.109 06:07:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:20:45.109 06:07:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:20:45.109 06:07:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:20:45.109 | .driver_specific 00:20:45.109 | .nvme_error 00:20:45.109 | .status_code 00:20:45.109 | .command_transient_transport_error' 00:20:45.676 06:07:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 100 > 0 )) 00:20:45.676 06:07:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 94388 00:20:45.676 06:07:37 nvmf_tcp.nvmf_digest.nvmf_digest_error 
-- common/autotest_common.sh@948 -- # '[' -z 94388 ']' 00:20:45.676 06:07:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 94388 00:20:45.676 06:07:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:20:45.676 06:07:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:45.676 06:07:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94388 00:20:45.676 killing process with pid 94388 00:20:45.676 Received shutdown signal, test time was about 2.000000 seconds 00:20:45.676 00:20:45.676 Latency(us) 00:20:45.676 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:45.676 =================================================================================================================== 00:20:45.676 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:45.676 06:07:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:45.676 06:07:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:45.676 06:07:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94388' 00:20:45.676 06:07:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 94388 00:20:45.676 06:07:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 94388 00:20:45.676 06:07:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:20:45.676 06:07:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:20:45.676 06:07:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:20:45.676 06:07:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:20:45.676 06:07:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:20:45.676 06:07:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=94436 00:20:45.676 06:07:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 94436 /var/tmp/bperf.sock 00:20:45.676 06:07:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:20:45.676 06:07:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 94436 ']' 00:20:45.676 06:07:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:45.676 06:07:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:45.676 06:07:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:45.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:45.676 06:07:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:45.676 06:07:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:45.676 [2024-07-13 06:07:37.356547] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:20:45.676 [2024-07-13 06:07:37.356808] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94436 ] 00:20:45.676 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:45.676 Zero copy mechanism will not be used. 00:20:45.934 [2024-07-13 06:07:37.498035] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:45.934 [2024-07-13 06:07:37.537141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:45.934 [2024-07-13 06:07:37.572597] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:45.934 06:07:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:45.934 06:07:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:20:45.934 06:07:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:45.934 06:07:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:46.193 06:07:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:46.193 06:07:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.193 06:07:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:46.193 06:07:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.193 06:07:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:46.193 06:07:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:46.760 nvme0n1 00:20:46.760 06:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:20:46.760 06:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.760 06:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:46.760 06:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.760 06:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:46.760 06:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:46.760 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:46.760 Zero copy mechanism will not be used. 00:20:46.760 Running I/O for 2 seconds... 
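For readers following the xtrace output above: the digest-error pass boils down to a handful of RPC calls against the bperf socket. The script below is a condensed, illustrative sketch assembled only from the commands visible in this trace (the /var/tmp/bperf.sock socket, the 10.0.0.2:4420 TCP target, the --ddgst attach, the accel_error_inject_error calls, and the bdev_get_iostat/jq step); it is not the literal host/digest.sh, and the target-side rpc_cmd is approximated here as rpc.py against the default target socket.

#!/usr/bin/env bash
# Illustrative sketch of the digest-error flow traced in this log (not the literal host/digest.sh).
SPDK=/home/vagrant/spdk_repo/spdk
BPERF_SOCK=/var/tmp/bperf.sock

# Start bdevperf on its own RPC socket: 128 KiB random reads, queue depth 16,
# -z makes it wait for RPC-driven tests instead of starting immediately.
"$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" -w randread -o 131072 -t 2 -q 16 -z &
bperfpid=$!

# Collect per-controller NVMe error statistics and retry failed I/O indefinitely.
"$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Target side (rpc_cmd in the trace): clear any previous crc32c error injection.
"$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable

# Attach the TCP controller with data digest enabled so corrupted CRCs surface on the host.
"$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Target side: corrupt the result of crc32c operations (-t corrupt -i 32, as in the trace).
"$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32

# Run the 2-second workload, then count completions that came back as
# COMMAND TRANSIENT TRANSPORT ERROR (00/22), mirroring the bdev_get_iostat | jq step above.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests
errs=$("$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
(( errs > 0 )) && echo "data digest errors reported as transient transport errors: $errs"

kill "$bperfpid"; wait "$bperfpid" || true

The trace that follows is the second such pass (131072-byte reads at queue depth 16), where each injected digest error again shows up as a READ completion with a transient transport error status.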
00:20:46.760 [2024-07-13 06:07:38.403140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:46.760 [2024-07-13 06:07:38.403241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.760 [2024-07-13 06:07:38.403259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:46.760 [2024-07-13 06:07:38.408313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:46.760 [2024-07-13 06:07:38.408401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.760 [2024-07-13 06:07:38.408418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:46.760 [2024-07-13 06:07:38.413675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:46.760 [2024-07-13 06:07:38.413719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.760 [2024-07-13 06:07:38.413734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:46.760 [2024-07-13 06:07:38.419052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:46.760 [2024-07-13 06:07:38.419112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.760 [2024-07-13 06:07:38.419159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:46.760 [2024-07-13 06:07:38.424080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:46.760 [2024-07-13 06:07:38.424139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.760 [2024-07-13 06:07:38.424183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:46.760 [2024-07-13 06:07:38.429618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:46.760 [2024-07-13 06:07:38.429659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.760 [2024-07-13 06:07:38.429705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:46.760 [2024-07-13 06:07:38.434629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:46.760 [2024-07-13 06:07:38.434679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.760 [2024-07-13 06:07:38.434695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:46.760 [2024-07-13 06:07:38.439321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:46.760 [2024-07-13 06:07:38.439364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.760 [2024-07-13 06:07:38.439412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:46.760 [2024-07-13 06:07:38.444378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:46.760 [2024-07-13 06:07:38.444448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.760 [2024-07-13 06:07:38.444480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:46.760 [2024-07-13 06:07:38.449559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:46.760 [2024-07-13 06:07:38.449598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.760 [2024-07-13 06:07:38.449645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:46.760 [2024-07-13 06:07:38.454718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:46.760 [2024-07-13 06:07:38.454758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.760 [2024-07-13 06:07:38.454788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:46.760 [2024-07-13 06:07:38.459897] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:46.760 [2024-07-13 06:07:38.459939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.760 [2024-07-13 06:07:38.459954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:46.760 [2024-07-13 06:07:38.464846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:46.760 [2024-07-13 06:07:38.464888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.760 [2024-07-13 06:07:38.464903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:46.760 [2024-07-13 06:07:38.469648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:46.760 [2024-07-13 06:07:38.469719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.760 [2024-07-13 06:07:38.469734] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:46.760 [2024-07-13 06:07:38.474562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:46.760 [2024-07-13 06:07:38.474617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.760 [2024-07-13 06:07:38.474648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:46.760 [2024-07-13 06:07:38.479414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:46.760 [2024-07-13 06:07:38.479468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.760 [2024-07-13 06:07:38.479483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:46.760 [2024-07-13 06:07:38.484161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:46.760 [2024-07-13 06:07:38.484203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.760 [2024-07-13 06:07:38.484217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:47.019 [2024-07-13 06:07:38.488979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.019 [2024-07-13 06:07:38.489021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.019 [2024-07-13 06:07:38.489036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:47.019 [2024-07-13 06:07:38.493593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.019 [2024-07-13 06:07:38.493635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.019 [2024-07-13 06:07:38.493650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:47.019 [2024-07-13 06:07:38.498578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.019 [2024-07-13 06:07:38.498634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.019 [2024-07-13 06:07:38.498648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:47.019 [2024-07-13 06:07:38.503453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.019 [2024-07-13 06:07:38.503493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:47.019 [2024-07-13 06:07:38.503507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:47.019 [2024-07-13 06:07:38.508245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.019 [2024-07-13 06:07:38.508286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.020 [2024-07-13 06:07:38.508300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:47.020 [2024-07-13 06:07:38.513046] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.020 [2024-07-13 06:07:38.513087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.020 [2024-07-13 06:07:38.513101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:47.020 [2024-07-13 06:07:38.517938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.020 [2024-07-13 06:07:38.517993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.020 [2024-07-13 06:07:38.518024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:47.020 [2024-07-13 06:07:38.522654] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.020 [2024-07-13 06:07:38.522694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.020 [2024-07-13 06:07:38.522724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:47.020 [2024-07-13 06:07:38.527712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.020 [2024-07-13 06:07:38.527753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.020 [2024-07-13 06:07:38.527768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:47.020 [2024-07-13 06:07:38.532872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.020 [2024-07-13 06:07:38.532912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.020 [2024-07-13 06:07:38.532943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:47.020 [2024-07-13 06:07:38.537932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.020 [2024-07-13 06:07:38.537973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.020 [2024-07-13 06:07:38.538004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:47.020 [2024-07-13 06:07:38.542901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.020 [2024-07-13 06:07:38.542974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.020 [2024-07-13 06:07:38.543009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:47.020 [2024-07-13 06:07:38.548352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.020 [2024-07-13 06:07:38.548416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.020 [2024-07-13 06:07:38.548451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:47.020 [2024-07-13 06:07:38.553458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.020 [2024-07-13 06:07:38.553512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.020 [2024-07-13 06:07:38.553545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:47.020 [2024-07-13 06:07:38.558466] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.020 [2024-07-13 06:07:38.558508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.020 [2024-07-13 06:07:38.558522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:47.020 [2024-07-13 06:07:38.563360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.020 [2024-07-13 06:07:38.563429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.020 [2024-07-13 06:07:38.563461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:47.020 [2024-07-13 06:07:38.568284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.020 [2024-07-13 06:07:38.568324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.020 [2024-07-13 06:07:38.568339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:47.020 [2024-07-13 06:07:38.573256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.020 [2024-07-13 06:07:38.573296] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.020 [2024-07-13 06:07:38.573310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:47.020 [2024-07-13 06:07:38.578500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.020 [2024-07-13 06:07:38.578541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.020 [2024-07-13 06:07:38.578556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:47.020 [2024-07-13 06:07:38.583499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.020 [2024-07-13 06:07:38.583555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.020 [2024-07-13 06:07:38.583571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:47.020 [2024-07-13 06:07:38.588686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.020 [2024-07-13 06:07:38.588726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.020 [2024-07-13 06:07:38.588755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:47.020 [2024-07-13 06:07:38.594062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.020 [2024-07-13 06:07:38.594120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.020 [2024-07-13 06:07:38.594165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:47.020 [2024-07-13 06:07:38.599491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.020 [2024-07-13 06:07:38.599551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.020 [2024-07-13 06:07:38.599567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:47.020 [2024-07-13 06:07:38.604727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.020 [2024-07-13 06:07:38.604766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.020 [2024-07-13 06:07:38.604780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:47.020 [2024-07-13 06:07:38.609893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 
00:20:47.020 [2024-07-13 06:07:38.609934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.020 [2024-07-13 06:07:38.609948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:47.020 [2024-07-13 06:07:38.614558] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.020 [2024-07-13 06:07:38.614597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.020 [2024-07-13 06:07:38.614611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:47.020 [2024-07-13 06:07:38.619280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.020 [2024-07-13 06:07:38.619320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.020 [2024-07-13 06:07:38.619351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:47.020 [2024-07-13 06:07:38.624317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.020 [2024-07-13 06:07:38.624358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.020 [2024-07-13 06:07:38.624399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:47.020 [2024-07-13 06:07:38.629693] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.020 [2024-07-13 06:07:38.629767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.020 [2024-07-13 06:07:38.629781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:47.020 [2024-07-13 06:07:38.634802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.020 [2024-07-13 06:07:38.634842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.020 [2024-07-13 06:07:38.634873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:47.020 [2024-07-13 06:07:38.639682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.020 [2024-07-13 06:07:38.639769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.020 [2024-07-13 06:07:38.639797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:47.020 [2024-07-13 06:07:38.644798] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.020 [2024-07-13 06:07:38.644836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.020 [2024-07-13 06:07:38.644866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:47.020 [2024-07-13 06:07:38.649959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.020 [2024-07-13 06:07:38.650014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.020 [2024-07-13 06:07:38.650028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:47.020 [2024-07-13 06:07:38.654901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.021 [2024-07-13 06:07:38.654940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.021 [2024-07-13 06:07:38.654970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:47.021 [2024-07-13 06:07:38.659879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.021 [2024-07-13 06:07:38.659918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.021 [2024-07-13 06:07:38.659932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:47.021 [2024-07-13 06:07:38.664891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.021 [2024-07-13 06:07:38.664931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.021 [2024-07-13 06:07:38.664962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:47.021 [2024-07-13 06:07:38.669909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.021 [2024-07-13 06:07:38.669951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.021 [2024-07-13 06:07:38.669981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:47.021 [2024-07-13 06:07:38.675082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.021 [2024-07-13 06:07:38.675123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.021 [2024-07-13 06:07:38.675170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:20:47.021 [2024-07-13 06:07:38.680058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.021 [2024-07-13 06:07:38.680112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.021 [2024-07-13 06:07:38.680142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:47.021 [2024-07-13 06:07:38.684998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.021 [2024-07-13 06:07:38.685039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.021 [2024-07-13 06:07:38.685069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:47.021 [2024-07-13 06:07:38.689911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.021 [2024-07-13 06:07:38.689951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.021 [2024-07-13 06:07:38.689981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:47.021 [2024-07-13 06:07:38.695147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.021 [2024-07-13 06:07:38.695204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.021 [2024-07-13 06:07:38.695219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:47.021 [2024-07-13 06:07:38.700440] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.021 [2024-07-13 06:07:38.700494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.021 [2024-07-13 06:07:38.700510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:47.021 [2024-07-13 06:07:38.705536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.021 [2024-07-13 06:07:38.705576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.021 [2024-07-13 06:07:38.705590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:47.021 [2024-07-13 06:07:38.710905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.021 [2024-07-13 06:07:38.710945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.021 [2024-07-13 06:07:38.710959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:47.021 [2024-07-13 06:07:38.715934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.021 [2024-07-13 06:07:38.715987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.021 [2024-07-13 06:07:38.716018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:47.021 [2024-07-13 06:07:38.720501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.021 [2024-07-13 06:07:38.720539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.021 [2024-07-13 06:07:38.720553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:47.021 [2024-07-13 06:07:38.725022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.021 [2024-07-13 06:07:38.725056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.021 [2024-07-13 06:07:38.725070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:47.021 [2024-07-13 06:07:38.729596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.021 [2024-07-13 06:07:38.729631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.021 [2024-07-13 06:07:38.729645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:47.021 [2024-07-13 06:07:38.734240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.021 [2024-07-13 06:07:38.734274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.021 [2024-07-13 06:07:38.734288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:47.021 [2024-07-13 06:07:38.739253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.021 [2024-07-13 06:07:38.739293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.021 [2024-07-13 06:07:38.739306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:47.021 [2024-07-13 06:07:38.744246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.021 [2024-07-13 06:07:38.744317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.021 [2024-07-13 06:07:38.744331] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:47.281 [2024-07-13 06:07:38.749194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.281 [2024-07-13 06:07:38.749232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.281 [2024-07-13 06:07:38.749252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:47.281 [2024-07-13 06:07:38.754355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.281 [2024-07-13 06:07:38.754403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.281 [2024-07-13 06:07:38.754417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:47.281 [2024-07-13 06:07:38.758972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.281 [2024-07-13 06:07:38.759009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.281 [2024-07-13 06:07:38.759022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:47.281 [2024-07-13 06:07:38.763625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.281 [2024-07-13 06:07:38.763677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.281 [2024-07-13 06:07:38.763691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:47.281 [2024-07-13 06:07:38.768898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.281 [2024-07-13 06:07:38.768951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.281 [2024-07-13 06:07:38.768964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:47.281 [2024-07-13 06:07:38.774029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.281 [2024-07-13 06:07:38.774083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.281 [2024-07-13 06:07:38.774110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:47.281 [2024-07-13 06:07:38.779529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.281 [2024-07-13 06:07:38.779610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:47.281 [2024-07-13 06:07:38.779626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:47.281 [2024-07-13 06:07:38.784991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.281 [2024-07-13 06:07:38.785044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.281 [2024-07-13 06:07:38.785057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:47.281 [2024-07-13 06:07:38.790130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.281 [2024-07-13 06:07:38.790182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.281 [2024-07-13 06:07:38.790222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:47.281 [2024-07-13 06:07:38.795009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.281 [2024-07-13 06:07:38.795077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.281 [2024-07-13 06:07:38.795121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:47.281 [2024-07-13 06:07:38.799896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.281 [2024-07-13 06:07:38.799949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.281 [2024-07-13 06:07:38.799964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:47.281 [2024-07-13 06:07:38.804998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.281 [2024-07-13 06:07:38.805049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.281 [2024-07-13 06:07:38.805062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:47.281 [2024-07-13 06:07:38.810110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.281 [2024-07-13 06:07:38.810163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.281 [2024-07-13 06:07:38.810176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:47.281 [2024-07-13 06:07:38.815195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.281 [2024-07-13 06:07:38.815249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.281 [2024-07-13 06:07:38.815263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:47.281 [2024-07-13 06:07:38.820052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.281 [2024-07-13 06:07:38.820090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.281 [2024-07-13 06:07:38.820103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:47.281 [2024-07-13 06:07:38.825029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.281 [2024-07-13 06:07:38.825065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.281 [2024-07-13 06:07:38.825078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:47.281 [2024-07-13 06:07:38.830000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.281 [2024-07-13 06:07:38.830037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.281 [2024-07-13 06:07:38.830084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:47.281 [2024-07-13 06:07:38.835116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.281 [2024-07-13 06:07:38.835154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.281 [2024-07-13 06:07:38.835175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:47.281 [2024-07-13 06:07:38.840045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.281 [2024-07-13 06:07:38.840083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.281 [2024-07-13 06:07:38.840096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:47.281 [2024-07-13 06:07:38.844808] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.281 [2024-07-13 06:07:38.844861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.281 [2024-07-13 06:07:38.844875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:47.281 [2024-07-13 06:07:38.849607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.282 [2024-07-13 06:07:38.849645] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.282 [2024-07-13 06:07:38.849658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:47.282 [2024-07-13 06:07:38.854194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.282 [2024-07-13 06:07:38.854232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.282 [2024-07-13 06:07:38.854247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:47.282 [2024-07-13 06:07:38.858848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.282 [2024-07-13 06:07:38.858885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.282 [2024-07-13 06:07:38.858899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:47.282 [2024-07-13 06:07:38.863354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.282 [2024-07-13 06:07:38.863405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.282 [2024-07-13 06:07:38.863419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:47.282 [2024-07-13 06:07:38.868421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.282 [2024-07-13 06:07:38.868473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.282 [2024-07-13 06:07:38.868488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:47.282 [2024-07-13 06:07:38.873522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.282 [2024-07-13 06:07:38.873574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.282 [2024-07-13 06:07:38.873589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:47.282 [2024-07-13 06:07:38.878162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.282 [2024-07-13 06:07:38.878210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.282 [2024-07-13 06:07:38.878224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:47.282 [2024-07-13 06:07:38.883017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x172da10) 00:20:47.282 [2024-07-13 06:07:38.883055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.282 [2024-07-13 06:07:38.883068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:47.282 [2024-07-13 06:07:38.888265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.282 [2024-07-13 06:07:38.888333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.282 [2024-07-13 06:07:38.888362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:47.282 [2024-07-13 06:07:38.893629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.282 [2024-07-13 06:07:38.893680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.282 [2024-07-13 06:07:38.893693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:47.282 [2024-07-13 06:07:38.898419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.282 [2024-07-13 06:07:38.898456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.282 [2024-07-13 06:07:38.898470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:47.282 [2024-07-13 06:07:38.903321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.282 [2024-07-13 06:07:38.903359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.282 [2024-07-13 06:07:38.903388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:47.282 [2024-07-13 06:07:38.908334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.282 [2024-07-13 06:07:38.908398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.282 [2024-07-13 06:07:38.908414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:47.282 [2024-07-13 06:07:38.913332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.282 [2024-07-13 06:07:38.913410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.282 [2024-07-13 06:07:38.913425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:47.282 [2024-07-13 06:07:38.918533] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.282 [2024-07-13 06:07:38.918570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.282 [2024-07-13 06:07:38.918583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:47.282 [2024-07-13 06:07:38.923677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.282 [2024-07-13 06:07:38.923744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.282 [2024-07-13 06:07:38.923757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:47.282 [2024-07-13 06:07:38.928950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.282 [2024-07-13 06:07:38.929018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.282 [2024-07-13 06:07:38.929032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:47.282 [2024-07-13 06:07:38.934166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.282 [2024-07-13 06:07:38.934226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.282 [2024-07-13 06:07:38.934240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:47.282 [2024-07-13 06:07:38.939049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.282 [2024-07-13 06:07:38.939086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.282 [2024-07-13 06:07:38.939100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:47.282 [2024-07-13 06:07:38.943834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.282 [2024-07-13 06:07:38.943884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.282 [2024-07-13 06:07:38.943913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:47.282 [2024-07-13 06:07:38.948956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.282 [2024-07-13 06:07:38.948992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.282 [2024-07-13 06:07:38.949004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:20:47.282 [2024-07-13 06:07:38.954225] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.282 [2024-07-13 06:07:38.954263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.282 [2024-07-13 06:07:38.954277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:47.282 [2024-07-13 06:07:38.959332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.282 [2024-07-13 06:07:38.959396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.282 [2024-07-13 06:07:38.959411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:47.282 [2024-07-13 06:07:38.964683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.282 [2024-07-13 06:07:38.964781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.282 [2024-07-13 06:07:38.964795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:47.282 [2024-07-13 06:07:38.970237] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.282 [2024-07-13 06:07:38.970274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.282 [2024-07-13 06:07:38.970288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:47.282 [2024-07-13 06:07:38.975587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.282 [2024-07-13 06:07:38.975636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.282 [2024-07-13 06:07:38.975651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:47.282 [2024-07-13 06:07:38.980582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.282 [2024-07-13 06:07:38.980644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.282 [2024-07-13 06:07:38.980658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:47.282 [2024-07-13 06:07:38.986022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.282 [2024-07-13 06:07:38.986073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.282 [2024-07-13 06:07:38.986101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:47.282 [2024-07-13 06:07:38.991531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.282 [2024-07-13 06:07:38.991594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.282 [2024-07-13 06:07:38.991607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:47.283 [2024-07-13 06:07:38.996995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.283 [2024-07-13 06:07:38.997047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.283 [2024-07-13 06:07:38.997060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:47.283 [2024-07-13 06:07:39.002316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.283 [2024-07-13 06:07:39.002354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.283 [2024-07-13 06:07:39.002367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:47.542 [2024-07-13 06:07:39.007489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.542 [2024-07-13 06:07:39.007552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.542 [2024-07-13 06:07:39.007566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:47.542 [2024-07-13 06:07:39.012594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.542 [2024-07-13 06:07:39.012644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.542 [2024-07-13 06:07:39.012658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:47.542 [2024-07-13 06:07:39.017581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.542 [2024-07-13 06:07:39.017617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.542 [2024-07-13 06:07:39.017646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:47.542 [2024-07-13 06:07:39.022668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.542 [2024-07-13 06:07:39.022704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.542 [2024-07-13 06:07:39.022717] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:47.542 [2024-07-13 06:07:39.027745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.542 [2024-07-13 06:07:39.027781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.542 [2024-07-13 06:07:39.027794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:47.542 [2024-07-13 06:07:39.032588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.542 [2024-07-13 06:07:39.032639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.542 [2024-07-13 06:07:39.032652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:47.542 [2024-07-13 06:07:39.037714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.542 [2024-07-13 06:07:39.037779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.542 [2024-07-13 06:07:39.037792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:47.542 [2024-07-13 06:07:39.042671] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.542 [2024-07-13 06:07:39.042722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.542 [2024-07-13 06:07:39.042750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:47.542 [2024-07-13 06:07:39.048370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.542 [2024-07-13 06:07:39.048449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.542 [2024-07-13 06:07:39.048462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:47.542 [2024-07-13 06:07:39.054060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.542 [2024-07-13 06:07:39.054111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.542 [2024-07-13 06:07:39.054125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:47.542 [2024-07-13 06:07:39.059152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.542 [2024-07-13 06:07:39.059203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:47.542 [2024-07-13 06:07:39.059216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:47.542 [2024-07-13 06:07:39.064208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.542 [2024-07-13 06:07:39.064259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.542 [2024-07-13 06:07:39.064273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:47.542 [2024-07-13 06:07:39.069068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.542 [2024-07-13 06:07:39.069120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.542 [2024-07-13 06:07:39.069133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:47.542 [2024-07-13 06:07:39.074427] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.542 [2024-07-13 06:07:39.074470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.542 [2024-07-13 06:07:39.074483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:47.542 [2024-07-13 06:07:39.079607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.542 [2024-07-13 06:07:39.079656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.542 [2024-07-13 06:07:39.079669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:47.542 [2024-07-13 06:07:39.084747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.542 [2024-07-13 06:07:39.084814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.542 [2024-07-13 06:07:39.084841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:47.542 [2024-07-13 06:07:39.089942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.542 [2024-07-13 06:07:39.090026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.542 [2024-07-13 06:07:39.090055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:47.542 [2024-07-13 06:07:39.095091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.542 [2024-07-13 06:07:39.095127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.542 [2024-07-13 06:07:39.095140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:47.542 [2024-07-13 06:07:39.100334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.542 [2024-07-13 06:07:39.100397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.542 [2024-07-13 06:07:39.100428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:47.542 [2024-07-13 06:07:39.105604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.542 [2024-07-13 06:07:39.105654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.542 [2024-07-13 06:07:39.105667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:47.542 [2024-07-13 06:07:39.110687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.542 [2024-07-13 06:07:39.110738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.542 [2024-07-13 06:07:39.110767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:47.542 [2024-07-13 06:07:39.115421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.542 [2024-07-13 06:07:39.115467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.542 [2024-07-13 06:07:39.115480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:47.542 [2024-07-13 06:07:39.120609] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.542 [2024-07-13 06:07:39.120644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.542 [2024-07-13 06:07:39.120656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:47.542 [2024-07-13 06:07:39.125613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.542 [2024-07-13 06:07:39.125663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.542 [2024-07-13 06:07:39.125677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:47.542 [2024-07-13 06:07:39.130608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.542 [2024-07-13 06:07:39.130657] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.542 [2024-07-13 06:07:39.130669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:47.542 [2024-07-13 06:07:39.135698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.543 [2024-07-13 06:07:39.135749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.543 [2024-07-13 06:07:39.135762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:47.543 [2024-07-13 06:07:39.140636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.543 [2024-07-13 06:07:39.140671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.543 [2024-07-13 06:07:39.140683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:47.543 [2024-07-13 06:07:39.145893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.543 [2024-07-13 06:07:39.145990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.543 [2024-07-13 06:07:39.146017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:47.543 [2024-07-13 06:07:39.151199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.543 [2024-07-13 06:07:39.151251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.543 [2024-07-13 06:07:39.151264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:47.543 [2024-07-13 06:07:39.156125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.543 [2024-07-13 06:07:39.156208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.543 [2024-07-13 06:07:39.156221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:47.543 [2024-07-13 06:07:39.161296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.543 [2024-07-13 06:07:39.161333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.543 [2024-07-13 06:07:39.161346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:47.543 [2024-07-13 06:07:39.166817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 
00:20:47.543 [2024-07-13 06:07:39.166883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.543 [2024-07-13 06:07:39.166913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:47.543 [2024-07-13 06:07:39.171874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.543 [2024-07-13 06:07:39.171926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.543 [2024-07-13 06:07:39.171940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:47.543 [2024-07-13 06:07:39.177181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.543 [2024-07-13 06:07:39.177247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.543 [2024-07-13 06:07:39.177260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:47.543 [2024-07-13 06:07:39.182123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.543 [2024-07-13 06:07:39.182227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.543 [2024-07-13 06:07:39.182242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:47.543 [2024-07-13 06:07:39.187342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.543 [2024-07-13 06:07:39.187407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.543 [2024-07-13 06:07:39.187422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:47.543 [2024-07-13 06:07:39.192371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.543 [2024-07-13 06:07:39.192416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.543 [2024-07-13 06:07:39.192446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:47.543 [2024-07-13 06:07:39.197297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.543 [2024-07-13 06:07:39.197366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.543 [2024-07-13 06:07:39.197386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:47.543 [2024-07-13 06:07:39.202537] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.543 [2024-07-13 06:07:39.202603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.543 [2024-07-13 06:07:39.202631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:47.543 [2024-07-13 06:07:39.207574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.543 [2024-07-13 06:07:39.207610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.543 [2024-07-13 06:07:39.207623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:47.543 [2024-07-13 06:07:39.212373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.543 [2024-07-13 06:07:39.212434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.543 [2024-07-13 06:07:39.212448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:47.543 [2024-07-13 06:07:39.217444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.543 [2024-07-13 06:07:39.217492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.543 [2024-07-13 06:07:39.217505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:47.543 [2024-07-13 06:07:39.222337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.543 [2024-07-13 06:07:39.222389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.543 [2024-07-13 06:07:39.222405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:47.543 [2024-07-13 06:07:39.227036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.543 [2024-07-13 06:07:39.227087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.543 [2024-07-13 06:07:39.227100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:47.543 [2024-07-13 06:07:39.231690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.543 [2024-07-13 06:07:39.231726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.543 [2024-07-13 06:07:39.231756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:20:47.543 [2024-07-13 06:07:39.236529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.543 [2024-07-13 06:07:39.236578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.543 [2024-07-13 06:07:39.236591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:47.543 [2024-07-13 06:07:39.241935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.543 [2024-07-13 06:07:39.242016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.543 [2024-07-13 06:07:39.242030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:47.543 [2024-07-13 06:07:39.247082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.543 [2024-07-13 06:07:39.247149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.543 [2024-07-13 06:07:39.247197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:47.543 [2024-07-13 06:07:39.252327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.543 [2024-07-13 06:07:39.252379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.543 [2024-07-13 06:07:39.252404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:47.543 [2024-07-13 06:07:39.257341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.543 [2024-07-13 06:07:39.257414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.543 [2024-07-13 06:07:39.257428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:47.543 [2024-07-13 06:07:39.262422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.543 [2024-07-13 06:07:39.262465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.543 [2024-07-13 06:07:39.262480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:47.543 [2024-07-13 06:07:39.267538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.543 [2024-07-13 06:07:39.267605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.543 [2024-07-13 06:07:39.267619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:47.803 [2024-07-13 06:07:39.272634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.803 [2024-07-13 06:07:39.272669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.803 [2024-07-13 06:07:39.272682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:47.803 [2024-07-13 06:07:39.277577] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.803 [2024-07-13 06:07:39.277675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.803 [2024-07-13 06:07:39.277703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:47.803 [2024-07-13 06:07:39.282674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.803 [2024-07-13 06:07:39.282741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.803 [2024-07-13 06:07:39.282754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:47.803 [2024-07-13 06:07:39.287692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.803 [2024-07-13 06:07:39.287728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.803 [2024-07-13 06:07:39.287741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:47.803 [2024-07-13 06:07:39.292660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.803 [2024-07-13 06:07:39.292712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.803 [2024-07-13 06:07:39.292725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:47.803 [2024-07-13 06:07:39.297787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.803 [2024-07-13 06:07:39.297839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.803 [2024-07-13 06:07:39.297866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:47.803 [2024-07-13 06:07:39.302965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.803 [2024-07-13 06:07:39.303017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.803 [2024-07-13 06:07:39.303029] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:47.803 [2024-07-13 06:07:39.308206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.803 [2024-07-13 06:07:39.308272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.803 [2024-07-13 06:07:39.308286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:47.803 [2024-07-13 06:07:39.313219] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.803 [2024-07-13 06:07:39.313317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.803 [2024-07-13 06:07:39.313346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:47.803 [2024-07-13 06:07:39.318406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.803 [2024-07-13 06:07:39.318442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.803 [2024-07-13 06:07:39.318456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:47.803 [2024-07-13 06:07:39.323172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.803 [2024-07-13 06:07:39.323223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.803 [2024-07-13 06:07:39.323237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:47.803 [2024-07-13 06:07:39.328238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.803 [2024-07-13 06:07:39.328273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.803 [2024-07-13 06:07:39.328286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:47.803 [2024-07-13 06:07:39.333294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.803 [2024-07-13 06:07:39.333330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.803 [2024-07-13 06:07:39.333343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:47.803 [2024-07-13 06:07:39.338554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.803 [2024-07-13 06:07:39.338605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:47.803 [2024-07-13 06:07:39.338618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:47.803 [2024-07-13 06:07:39.343641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.803 [2024-07-13 06:07:39.343691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.803 [2024-07-13 06:07:39.343705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:47.803 [2024-07-13 06:07:39.348694] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.803 [2024-07-13 06:07:39.348761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.803 [2024-07-13 06:07:39.348808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:47.803 [2024-07-13 06:07:39.353631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.803 [2024-07-13 06:07:39.353686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.803 [2024-07-13 06:07:39.353701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:47.803 [2024-07-13 06:07:39.358674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.803 [2024-07-13 06:07:39.358710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.803 [2024-07-13 06:07:39.358723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:47.803 [2024-07-13 06:07:39.363596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.803 [2024-07-13 06:07:39.363661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.803 [2024-07-13 06:07:39.363690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:47.803 [2024-07-13 06:07:39.368686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.803 [2024-07-13 06:07:39.368738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.803 [2024-07-13 06:07:39.368752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:47.803 [2024-07-13 06:07:39.373619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.803 [2024-07-13 06:07:39.373670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.803 [2024-07-13 06:07:39.373683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:47.803 [2024-07-13 06:07:39.378930] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.803 [2024-07-13 06:07:39.378982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.803 [2024-07-13 06:07:39.378995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:47.803 [2024-07-13 06:07:39.384181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.803 [2024-07-13 06:07:39.384232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.803 [2024-07-13 06:07:39.384245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:47.803 [2024-07-13 06:07:39.389121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.803 [2024-07-13 06:07:39.389157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.803 [2024-07-13 06:07:39.389169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:47.803 [2024-07-13 06:07:39.393923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.803 [2024-07-13 06:07:39.393975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.803 [2024-07-13 06:07:39.393990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:47.803 [2024-07-13 06:07:39.398660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.803 [2024-07-13 06:07:39.398697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.803 [2024-07-13 06:07:39.398710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:47.804 [2024-07-13 06:07:39.403409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.804 [2024-07-13 06:07:39.403458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.804 [2024-07-13 06:07:39.403472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:47.804 [2024-07-13 06:07:39.408284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.804 [2024-07-13 06:07:39.408325] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.804 [2024-07-13 06:07:39.408339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:47.804 [2024-07-13 06:07:39.413212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.804 [2024-07-13 06:07:39.413264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.804 [2024-07-13 06:07:39.413277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:47.804 [2024-07-13 06:07:39.418363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.804 [2024-07-13 06:07:39.418409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.804 [2024-07-13 06:07:39.418423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:47.804 [2024-07-13 06:07:39.423481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.804 [2024-07-13 06:07:39.423530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.804 [2024-07-13 06:07:39.423544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:47.804 [2024-07-13 06:07:39.428551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.804 [2024-07-13 06:07:39.428601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.804 [2024-07-13 06:07:39.428614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:47.804 [2024-07-13 06:07:39.433797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.804 [2024-07-13 06:07:39.433848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.804 [2024-07-13 06:07:39.433861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:47.804 [2024-07-13 06:07:39.439041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.804 [2024-07-13 06:07:39.439109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.804 [2024-07-13 06:07:39.439138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:47.804 [2024-07-13 06:07:39.444300] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x172da10) 00:20:47.804 [2024-07-13 06:07:39.444351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.804 [2024-07-13 06:07:39.444364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:47.804 [2024-07-13 06:07:39.449484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.804 [2024-07-13 06:07:39.449574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.804 [2024-07-13 06:07:39.449587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:47.804 [2024-07-13 06:07:39.454993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.804 [2024-07-13 06:07:39.455044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.804 [2024-07-13 06:07:39.455056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:47.804 [2024-07-13 06:07:39.460482] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.804 [2024-07-13 06:07:39.460559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.804 [2024-07-13 06:07:39.460588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:47.804 [2024-07-13 06:07:39.465811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.804 [2024-07-13 06:07:39.465847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.804 [2024-07-13 06:07:39.465860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:47.804 [2024-07-13 06:07:39.470751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.804 [2024-07-13 06:07:39.470803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.804 [2024-07-13 06:07:39.470816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:47.804 [2024-07-13 06:07:39.475744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.804 [2024-07-13 06:07:39.475780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.804 [2024-07-13 06:07:39.475793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:47.804 [2024-07-13 06:07:39.481019] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.804 [2024-07-13 06:07:39.481072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.804 [2024-07-13 06:07:39.481086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:47.804 [2024-07-13 06:07:39.486376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.804 [2024-07-13 06:07:39.486424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.804 [2024-07-13 06:07:39.486438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:47.804 [2024-07-13 06:07:39.491623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.804 [2024-07-13 06:07:39.491675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.804 [2024-07-13 06:07:39.491689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:47.804 [2024-07-13 06:07:39.496607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.804 [2024-07-13 06:07:39.496657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.804 [2024-07-13 06:07:39.496687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:47.804 [2024-07-13 06:07:39.501933] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.804 [2024-07-13 06:07:39.502001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.804 [2024-07-13 06:07:39.502029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:47.804 [2024-07-13 06:07:39.507060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.804 [2024-07-13 06:07:39.507112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.804 [2024-07-13 06:07:39.507127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:47.804 [2024-07-13 06:07:39.512080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.804 [2024-07-13 06:07:39.512118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.804 [2024-07-13 06:07:39.512131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:20:47.804 [2024-07-13 06:07:39.517315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.804 [2024-07-13 06:07:39.517368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.804 [2024-07-13 06:07:39.517394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:47.804 [2024-07-13 06:07:39.521837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.804 [2024-07-13 06:07:39.521873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.804 [2024-07-13 06:07:39.521887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:47.804 [2024-07-13 06:07:39.526686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:47.804 [2024-07-13 06:07:39.526723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.804 [2024-07-13 06:07:39.526736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.064 [2024-07-13 06:07:39.531602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.064 [2024-07-13 06:07:39.531637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.064 [2024-07-13 06:07:39.531651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.064 [2024-07-13 06:07:39.536484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.064 [2024-07-13 06:07:39.536534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.064 [2024-07-13 06:07:39.536549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.064 [2024-07-13 06:07:39.541065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.064 [2024-07-13 06:07:39.541131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.064 [2024-07-13 06:07:39.541145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.064 [2024-07-13 06:07:39.546039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.064 [2024-07-13 06:07:39.546076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.064 [2024-07-13 06:07:39.546090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.064 [2024-07-13 06:07:39.551001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.064 [2024-07-13 06:07:39.551070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.064 [2024-07-13 06:07:39.551083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.064 [2024-07-13 06:07:39.555630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.064 [2024-07-13 06:07:39.555670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.064 [2024-07-13 06:07:39.555684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.064 [2024-07-13 06:07:39.560714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.064 [2024-07-13 06:07:39.560751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.064 [2024-07-13 06:07:39.560764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.064 [2024-07-13 06:07:39.565806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.064 [2024-07-13 06:07:39.565858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.064 [2024-07-13 06:07:39.565870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.064 [2024-07-13 06:07:39.571323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.064 [2024-07-13 06:07:39.571391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.064 [2024-07-13 06:07:39.571437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.064 [2024-07-13 06:07:39.576561] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.064 [2024-07-13 06:07:39.576597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.064 [2024-07-13 06:07:39.576610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.064 [2024-07-13 06:07:39.581690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.064 [2024-07-13 06:07:39.581758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.064 [2024-07-13 06:07:39.581771] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.064 [2024-07-13 06:07:39.586852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.064 [2024-07-13 06:07:39.586904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.064 [2024-07-13 06:07:39.586917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.064 [2024-07-13 06:07:39.592011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.064 [2024-07-13 06:07:39.592062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.064 [2024-07-13 06:07:39.592076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.064 [2024-07-13 06:07:39.597317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.064 [2024-07-13 06:07:39.597369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.064 [2024-07-13 06:07:39.597394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.064 [2024-07-13 06:07:39.602688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.064 [2024-07-13 06:07:39.602773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.064 [2024-07-13 06:07:39.602804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.064 [2024-07-13 06:07:39.608053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.064 [2024-07-13 06:07:39.608089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.064 [2024-07-13 06:07:39.608102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.064 [2024-07-13 06:07:39.613217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.064 [2024-07-13 06:07:39.613267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.064 [2024-07-13 06:07:39.613280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.064 [2024-07-13 06:07:39.618315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.064 [2024-07-13 06:07:39.618352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:48.064 [2024-07-13 06:07:39.618378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.064 [2024-07-13 06:07:39.623467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.064 [2024-07-13 06:07:39.623516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.064 [2024-07-13 06:07:39.623531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.064 [2024-07-13 06:07:39.629015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.064 [2024-07-13 06:07:39.629068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.064 [2024-07-13 06:07:39.629082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.064 [2024-07-13 06:07:39.634511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.064 [2024-07-13 06:07:39.634548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.064 [2024-07-13 06:07:39.634561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.064 [2024-07-13 06:07:39.639675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.064 [2024-07-13 06:07:39.639726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.064 [2024-07-13 06:07:39.639740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.064 [2024-07-13 06:07:39.644937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.064 [2024-07-13 06:07:39.644972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.064 [2024-07-13 06:07:39.644985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.064 [2024-07-13 06:07:39.650167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.064 [2024-07-13 06:07:39.650231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.064 [2024-07-13 06:07:39.650246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.064 [2024-07-13 06:07:39.655443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.064 [2024-07-13 06:07:39.655490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.064 [2024-07-13 06:07:39.655504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.064 [2024-07-13 06:07:39.660504] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.064 [2024-07-13 06:07:39.660567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.064 [2024-07-13 06:07:39.660582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.064 [2024-07-13 06:07:39.665752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.064 [2024-07-13 06:07:39.665789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.064 [2024-07-13 06:07:39.665802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.064 [2024-07-13 06:07:39.670946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.064 [2024-07-13 06:07:39.670999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.065 [2024-07-13 06:07:39.671012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.065 [2024-07-13 06:07:39.675843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.065 [2024-07-13 06:07:39.675880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.065 [2024-07-13 06:07:39.675893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.065 [2024-07-13 06:07:39.680965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.065 [2024-07-13 06:07:39.681018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.065 [2024-07-13 06:07:39.681062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.065 [2024-07-13 06:07:39.686029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.065 [2024-07-13 06:07:39.686096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.065 [2024-07-13 06:07:39.686110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.065 [2024-07-13 06:07:39.690934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.065 [2024-07-13 06:07:39.690970] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.065 [2024-07-13 06:07:39.691002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.065 [2024-07-13 06:07:39.696191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.065 [2024-07-13 06:07:39.696244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.065 [2024-07-13 06:07:39.696257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.065 [2024-07-13 06:07:39.701281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.065 [2024-07-13 06:07:39.701335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.065 [2024-07-13 06:07:39.701378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.065 [2024-07-13 06:07:39.706689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.065 [2024-07-13 06:07:39.706741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.065 [2024-07-13 06:07:39.706755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.065 [2024-07-13 06:07:39.711851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.065 [2024-07-13 06:07:39.711889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.065 [2024-07-13 06:07:39.711904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.065 [2024-07-13 06:07:39.717046] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.065 [2024-07-13 06:07:39.717141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.065 [2024-07-13 06:07:39.717156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.065 [2024-07-13 06:07:39.722258] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.065 [2024-07-13 06:07:39.722295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.065 [2024-07-13 06:07:39.722308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.065 [2024-07-13 06:07:39.727155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 
00:20:48.065 [2024-07-13 06:07:39.727192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.065 [2024-07-13 06:07:39.727205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.065 [2024-07-13 06:07:39.731783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.065 [2024-07-13 06:07:39.731835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.065 [2024-07-13 06:07:39.731848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.065 [2024-07-13 06:07:39.736651] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.065 [2024-07-13 06:07:39.736718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.065 [2024-07-13 06:07:39.736732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.065 [2024-07-13 06:07:39.741549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.065 [2024-07-13 06:07:39.741615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.065 [2024-07-13 06:07:39.741627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.065 [2024-07-13 06:07:39.746371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.065 [2024-07-13 06:07:39.746421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.065 [2024-07-13 06:07:39.746436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.065 [2024-07-13 06:07:39.751633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.065 [2024-07-13 06:07:39.751684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.065 [2024-07-13 06:07:39.751698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.065 [2024-07-13 06:07:39.756408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.065 [2024-07-13 06:07:39.756471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.065 [2024-07-13 06:07:39.756486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.065 [2024-07-13 06:07:39.761628] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.065 [2024-07-13 06:07:39.761663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.065 [2024-07-13 06:07:39.761676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.065 [2024-07-13 06:07:39.766664] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.065 [2024-07-13 06:07:39.766699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.065 [2024-07-13 06:07:39.766711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.065 [2024-07-13 06:07:39.771712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.065 [2024-07-13 06:07:39.771747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.065 [2024-07-13 06:07:39.771760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.065 [2024-07-13 06:07:39.776751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.065 [2024-07-13 06:07:39.776832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.065 [2024-07-13 06:07:39.776845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.065 [2024-07-13 06:07:39.781787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.065 [2024-07-13 06:07:39.781854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.065 [2024-07-13 06:07:39.781868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.065 [2024-07-13 06:07:39.787418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.065 [2024-07-13 06:07:39.787463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.065 [2024-07-13 06:07:39.787477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.325 [2024-07-13 06:07:39.792799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.325 [2024-07-13 06:07:39.792836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.325 [2024-07-13 06:07:39.792850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:20:48.325 [2024-07-13 06:07:39.798073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.325 [2024-07-13 06:07:39.798124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.325 [2024-07-13 06:07:39.798137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.325 [2024-07-13 06:07:39.803316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.325 [2024-07-13 06:07:39.803352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.325 [2024-07-13 06:07:39.803408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.325 [2024-07-13 06:07:39.808361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.325 [2024-07-13 06:07:39.808422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.325 [2024-07-13 06:07:39.808435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.325 [2024-07-13 06:07:39.813308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.325 [2024-07-13 06:07:39.813360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.325 [2024-07-13 06:07:39.813374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.325 [2024-07-13 06:07:39.818317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.325 [2024-07-13 06:07:39.818355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.325 [2024-07-13 06:07:39.818382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.325 [2024-07-13 06:07:39.823200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.325 [2024-07-13 06:07:39.823251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.325 [2024-07-13 06:07:39.823264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.325 [2024-07-13 06:07:39.828329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.325 [2024-07-13 06:07:39.828406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.325 [2024-07-13 06:07:39.828421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.325 [2024-07-13 06:07:39.833346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.325 [2024-07-13 06:07:39.833391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.325 [2024-07-13 06:07:39.833404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.325 [2024-07-13 06:07:39.838265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.325 [2024-07-13 06:07:39.838302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.325 [2024-07-13 06:07:39.838316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.325 [2024-07-13 06:07:39.843653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.326 [2024-07-13 06:07:39.843705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.326 [2024-07-13 06:07:39.843732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.326 [2024-07-13 06:07:39.848775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.326 [2024-07-13 06:07:39.848840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.326 [2024-07-13 06:07:39.848853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.326 [2024-07-13 06:07:39.853791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.326 [2024-07-13 06:07:39.853842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.326 [2024-07-13 06:07:39.853855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.326 [2024-07-13 06:07:39.858838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.326 [2024-07-13 06:07:39.858889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.326 [2024-07-13 06:07:39.858917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.326 [2024-07-13 06:07:39.864255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.326 [2024-07-13 06:07:39.864294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.326 [2024-07-13 06:07:39.864308] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.326 [2024-07-13 06:07:39.869377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.326 [2024-07-13 06:07:39.869426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.326 [2024-07-13 06:07:39.869440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.326 [2024-07-13 06:07:39.874365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.326 [2024-07-13 06:07:39.874413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.326 [2024-07-13 06:07:39.874426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.326 [2024-07-13 06:07:39.879216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.326 [2024-07-13 06:07:39.879255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.326 [2024-07-13 06:07:39.879268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.326 [2024-07-13 06:07:39.884333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.326 [2024-07-13 06:07:39.884387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.326 [2024-07-13 06:07:39.884402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.326 [2024-07-13 06:07:39.889327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.326 [2024-07-13 06:07:39.889391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.326 [2024-07-13 06:07:39.889407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.326 [2024-07-13 06:07:39.894746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.326 [2024-07-13 06:07:39.894797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.326 [2024-07-13 06:07:39.894809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.326 [2024-07-13 06:07:39.900082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.326 [2024-07-13 06:07:39.900132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:48.326 [2024-07-13 06:07:39.900161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.326 [2024-07-13 06:07:39.905419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.326 [2024-07-13 06:07:39.905482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.326 [2024-07-13 06:07:39.905497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.326 [2024-07-13 06:07:39.910824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.326 [2024-07-13 06:07:39.910876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.326 [2024-07-13 06:07:39.910889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.326 [2024-07-13 06:07:39.916136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.326 [2024-07-13 06:07:39.916204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.326 [2024-07-13 06:07:39.916232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.326 [2024-07-13 06:07:39.921508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.326 [2024-07-13 06:07:39.921542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.326 [2024-07-13 06:07:39.921554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.326 [2024-07-13 06:07:39.926687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.326 [2024-07-13 06:07:39.926722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.326 [2024-07-13 06:07:39.926752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.326 [2024-07-13 06:07:39.931947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.326 [2024-07-13 06:07:39.932028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.326 [2024-07-13 06:07:39.932041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.326 [2024-07-13 06:07:39.937076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.326 [2024-07-13 06:07:39.937141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.326 [2024-07-13 06:07:39.937154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.326 [2024-07-13 06:07:39.942170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.326 [2024-07-13 06:07:39.942243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.326 [2024-07-13 06:07:39.942258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.326 [2024-07-13 06:07:39.947487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.326 [2024-07-13 06:07:39.947583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.326 [2024-07-13 06:07:39.947598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.326 [2024-07-13 06:07:39.953071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.326 [2024-07-13 06:07:39.953122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.326 [2024-07-13 06:07:39.953136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.326 [2024-07-13 06:07:39.958544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.326 [2024-07-13 06:07:39.958610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.326 [2024-07-13 06:07:39.958638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.326 [2024-07-13 06:07:39.964001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.326 [2024-07-13 06:07:39.964053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.326 [2024-07-13 06:07:39.964066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.326 [2024-07-13 06:07:39.969100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.326 [2024-07-13 06:07:39.969136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.326 [2024-07-13 06:07:39.969149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.326 [2024-07-13 06:07:39.974283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.326 [2024-07-13 06:07:39.974321] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.326 [2024-07-13 06:07:39.974335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.326 [2024-07-13 06:07:39.979459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.326 [2024-07-13 06:07:39.979508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.326 [2024-07-13 06:07:39.979522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.326 [2024-07-13 06:07:39.984530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.326 [2024-07-13 06:07:39.984608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.326 [2024-07-13 06:07:39.984637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.326 [2024-07-13 06:07:39.989968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.326 [2024-07-13 06:07:39.990019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.326 [2024-07-13 06:07:39.990032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.327 [2024-07-13 06:07:39.994519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.327 [2024-07-13 06:07:39.994555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.327 [2024-07-13 06:07:39.994568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.327 [2024-07-13 06:07:39.999302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.327 [2024-07-13 06:07:39.999355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.327 [2024-07-13 06:07:39.999370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.327 [2024-07-13 06:07:40.003913] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.327 [2024-07-13 06:07:40.003950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.327 [2024-07-13 06:07:40.003963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.327 [2024-07-13 06:07:40.008818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 
00:20:48.327 [2024-07-13 06:07:40.008856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.327 [2024-07-13 06:07:40.008870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.327 [2024-07-13 06:07:40.013983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.327 [2024-07-13 06:07:40.014020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.327 [2024-07-13 06:07:40.014034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.327 [2024-07-13 06:07:40.019186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.327 [2024-07-13 06:07:40.019225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.327 [2024-07-13 06:07:40.019238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.327 [2024-07-13 06:07:40.023882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.327 [2024-07-13 06:07:40.023934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.327 [2024-07-13 06:07:40.023962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.327 [2024-07-13 06:07:40.028733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.327 [2024-07-13 06:07:40.028785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.327 [2024-07-13 06:07:40.028798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.327 [2024-07-13 06:07:40.033966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.327 [2024-07-13 06:07:40.034018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.327 [2024-07-13 06:07:40.034033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.327 [2024-07-13 06:07:40.039470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.327 [2024-07-13 06:07:40.039533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.327 [2024-07-13 06:07:40.039547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.327 [2024-07-13 06:07:40.044849] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.327 [2024-07-13 06:07:40.044886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.327 [2024-07-13 06:07:40.044899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.327 [2024-07-13 06:07:40.049814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.327 [2024-07-13 06:07:40.049850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.327 [2024-07-13 06:07:40.049864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.586 [2024-07-13 06:07:40.054354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.586 [2024-07-13 06:07:40.054402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.586 [2024-07-13 06:07:40.054416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.586 [2024-07-13 06:07:40.060084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.586 [2024-07-13 06:07:40.060136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.586 [2024-07-13 06:07:40.060150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.586 [2024-07-13 06:07:40.065383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.586 [2024-07-13 06:07:40.065445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.586 [2024-07-13 06:07:40.065459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.586 [2024-07-13 06:07:40.070822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.586 [2024-07-13 06:07:40.070859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.586 [2024-07-13 06:07:40.070872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.586 [2024-07-13 06:07:40.075963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.586 [2024-07-13 06:07:40.076014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.586 [2024-07-13 06:07:40.076027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:20:48.586 [2024-07-13 06:07:40.081304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.586 [2024-07-13 06:07:40.081372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.586 [2024-07-13 06:07:40.081413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.586 [2024-07-13 06:07:40.086551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.586 [2024-07-13 06:07:40.086587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.586 [2024-07-13 06:07:40.086602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.586 [2024-07-13 06:07:40.091578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.586 [2024-07-13 06:07:40.091614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.586 [2024-07-13 06:07:40.091627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.586 [2024-07-13 06:07:40.096612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.586 [2024-07-13 06:07:40.096664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.586 [2024-07-13 06:07:40.096678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.586 [2024-07-13 06:07:40.101836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.586 [2024-07-13 06:07:40.101886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.587 [2024-07-13 06:07:40.101900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.587 [2024-07-13 06:07:40.107241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.587 [2024-07-13 06:07:40.107294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.587 [2024-07-13 06:07:40.107308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.587 [2024-07-13 06:07:40.112434] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.587 [2024-07-13 06:07:40.112481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.587 [2024-07-13 06:07:40.112495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.587 [2024-07-13 06:07:40.117427] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.587 [2024-07-13 06:07:40.117540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.587 [2024-07-13 06:07:40.117554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.587 [2024-07-13 06:07:40.123017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.587 [2024-07-13 06:07:40.123071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.587 [2024-07-13 06:07:40.123116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.587 [2024-07-13 06:07:40.128313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.587 [2024-07-13 06:07:40.128406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.587 [2024-07-13 06:07:40.128435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.587 [2024-07-13 06:07:40.133498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.587 [2024-07-13 06:07:40.133535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.587 [2024-07-13 06:07:40.133549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.587 [2024-07-13 06:07:40.138514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.587 [2024-07-13 06:07:40.138581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.587 [2024-07-13 06:07:40.138594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.587 [2024-07-13 06:07:40.143737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.587 [2024-07-13 06:07:40.143773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.587 [2024-07-13 06:07:40.143801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.587 [2024-07-13 06:07:40.149144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.587 [2024-07-13 06:07:40.149179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.587 [2024-07-13 06:07:40.149192] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.587 [2024-07-13 06:07:40.154351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.587 [2024-07-13 06:07:40.154399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.587 [2024-07-13 06:07:40.154413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.587 [2024-07-13 06:07:40.159110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.587 [2024-07-13 06:07:40.159179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.587 [2024-07-13 06:07:40.159194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.587 [2024-07-13 06:07:40.164159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.587 [2024-07-13 06:07:40.164210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.587 [2024-07-13 06:07:40.164240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.587 [2024-07-13 06:07:40.169223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.587 [2024-07-13 06:07:40.169276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.587 [2024-07-13 06:07:40.169290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.587 [2024-07-13 06:07:40.174453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.587 [2024-07-13 06:07:40.174490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.587 [2024-07-13 06:07:40.174503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.587 [2024-07-13 06:07:40.179812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.587 [2024-07-13 06:07:40.179849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.587 [2024-07-13 06:07:40.179862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.587 [2024-07-13 06:07:40.184530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.587 [2024-07-13 06:07:40.184566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:48.587 [2024-07-13 06:07:40.184580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.587 [2024-07-13 06:07:40.189230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.587 [2024-07-13 06:07:40.189298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.587 [2024-07-13 06:07:40.189311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.587 [2024-07-13 06:07:40.193787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.587 [2024-07-13 06:07:40.193843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.587 [2024-07-13 06:07:40.193872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.587 [2024-07-13 06:07:40.198848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.587 [2024-07-13 06:07:40.198901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.587 [2024-07-13 06:07:40.198914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.587 [2024-07-13 06:07:40.203637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.587 [2024-07-13 06:07:40.203673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.587 [2024-07-13 06:07:40.203686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.587 [2024-07-13 06:07:40.208531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.587 [2024-07-13 06:07:40.208584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.587 [2024-07-13 06:07:40.208597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.587 [2024-07-13 06:07:40.213369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.587 [2024-07-13 06:07:40.213433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.587 [2024-07-13 06:07:40.213447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.587 [2024-07-13 06:07:40.218362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.587 [2024-07-13 06:07:40.218409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.587 [2024-07-13 06:07:40.218423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.587 [2024-07-13 06:07:40.223206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.587 [2024-07-13 06:07:40.223258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.587 [2024-07-13 06:07:40.223272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.587 [2024-07-13 06:07:40.228351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.587 [2024-07-13 06:07:40.228398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.587 [2024-07-13 06:07:40.228413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.587 [2024-07-13 06:07:40.233561] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.587 [2024-07-13 06:07:40.233598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.587 [2024-07-13 06:07:40.233610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.587 [2024-07-13 06:07:40.238660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.587 [2024-07-13 06:07:40.238712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.587 [2024-07-13 06:07:40.238740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.587 [2024-07-13 06:07:40.243702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.587 [2024-07-13 06:07:40.243754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.587 [2024-07-13 06:07:40.243767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.587 [2024-07-13 06:07:40.248458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.588 [2024-07-13 06:07:40.248493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.588 [2024-07-13 06:07:40.248506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.588 [2024-07-13 06:07:40.253481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.588 [2024-07-13 06:07:40.253603] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.588 [2024-07-13 06:07:40.253632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.588 [2024-07-13 06:07:40.258472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.588 [2024-07-13 06:07:40.258508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.588 [2024-07-13 06:07:40.258522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.588 [2024-07-13 06:07:40.263637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.588 [2024-07-13 06:07:40.263688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.588 [2024-07-13 06:07:40.263701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.588 [2024-07-13 06:07:40.268863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.588 [2024-07-13 06:07:40.268899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.588 [2024-07-13 06:07:40.268912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.588 [2024-07-13 06:07:40.273958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.588 [2024-07-13 06:07:40.274011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.588 [2024-07-13 06:07:40.274025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.588 [2024-07-13 06:07:40.279298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.588 [2024-07-13 06:07:40.279350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.588 [2024-07-13 06:07:40.279364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.588 [2024-07-13 06:07:40.284276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.588 [2024-07-13 06:07:40.284363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.588 [2024-07-13 06:07:40.284376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.588 [2024-07-13 06:07:40.289652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 
00:20:48.588 [2024-07-13 06:07:40.289718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.588 [2024-07-13 06:07:40.289733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.588 [2024-07-13 06:07:40.294620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.588 [2024-07-13 06:07:40.294673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.588 [2024-07-13 06:07:40.294686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.588 [2024-07-13 06:07:40.299790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.588 [2024-07-13 06:07:40.299842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.588 [2024-07-13 06:07:40.299855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.588 [2024-07-13 06:07:40.304951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.588 [2024-07-13 06:07:40.305023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.588 [2024-07-13 06:07:40.305036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.588 [2024-07-13 06:07:40.310508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.588 [2024-07-13 06:07:40.310561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.588 [2024-07-13 06:07:40.310590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.847 [2024-07-13 06:07:40.315814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.847 [2024-07-13 06:07:40.315866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.847 [2024-07-13 06:07:40.315878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.847 [2024-07-13 06:07:40.320966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.847 [2024-07-13 06:07:40.321017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.847 [2024-07-13 06:07:40.321030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.847 [2024-07-13 06:07:40.326083] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.847 [2024-07-13 06:07:40.326135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.847 [2024-07-13 06:07:40.326175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.847 [2024-07-13 06:07:40.331330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.847 [2024-07-13 06:07:40.331409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.847 [2024-07-13 06:07:40.331425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.847 [2024-07-13 06:07:40.336447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.847 [2024-07-13 06:07:40.336570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.847 [2024-07-13 06:07:40.336598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.847 [2024-07-13 06:07:40.341776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.847 [2024-07-13 06:07:40.341811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.847 [2024-07-13 06:07:40.341824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.847 [2024-07-13 06:07:40.346743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.847 [2024-07-13 06:07:40.346812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.847 [2024-07-13 06:07:40.346841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.847 [2024-07-13 06:07:40.351775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.847 [2024-07-13 06:07:40.351811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.847 [2024-07-13 06:07:40.351823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.847 [2024-07-13 06:07:40.357083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.847 [2024-07-13 06:07:40.357119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.847 [2024-07-13 06:07:40.357132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:20:48.847 [2024-07-13 06:07:40.362087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.847 [2024-07-13 06:07:40.362142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.847 [2024-07-13 06:07:40.362180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.847 [2024-07-13 06:07:40.367308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.847 [2024-07-13 06:07:40.367376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.847 [2024-07-13 06:07:40.367390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.847 [2024-07-13 06:07:40.372822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.847 [2024-07-13 06:07:40.372873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.847 [2024-07-13 06:07:40.372885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.847 [2024-07-13 06:07:40.378115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.847 [2024-07-13 06:07:40.378151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.847 [2024-07-13 06:07:40.378164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.847 [2024-07-13 06:07:40.383277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.847 [2024-07-13 06:07:40.383315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.847 [2024-07-13 06:07:40.383329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.847 [2024-07-13 06:07:40.388546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.847 [2024-07-13 06:07:40.388581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.847 [2024-07-13 06:07:40.388594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.847 [2024-07-13 06:07:40.393675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172da10) 00:20:48.847 [2024-07-13 06:07:40.393710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.847 [2024-07-13 06:07:40.393723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:48.847
00:20:48.847 Latency(us)
00:20:48.847 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:48.847 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:20:48.847 nvme0n1 : 2.00 6073.61 759.20 0.00 0.00 2629.92 2070.34 7447.27
00:20:48.847 ===================================================================================================================
00:20:48.847 Total : 6073.61 759.20 0.00 0.00 2629.92 2070.34 7447.27
00:20:48.847 0
00:20:48.847 06:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:20:48.847 06:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:20:48.847 06:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:20:48.847 06:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:20:48.847 | .driver_specific
00:20:48.847 | .nvme_error
00:20:48.847 | .status_code
00:20:48.847 | .command_transient_transport_error'
00:20:49.106 06:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 392 > 0 ))
00:20:49.106 06:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 94436
00:20:49.106 06:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 94436 ']'
00:20:49.106 06:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 94436
00:20:49.106 06:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:20:49.106 06:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:20:49.106 06:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94436
killing process with pid 94436
Received shutdown signal, test time was about 2.000000 seconds
00:20:49.106
00:20:49.106 Latency(us)
00:20:49.106 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:49.106 ===================================================================================================================
00:20:49.106 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:49.106 06:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:20:49.106 06:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:20:49.106 06:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94436'
00:20:49.106 06:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 94436
00:20:49.106 06:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 94436
00:20:49.365 06:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:20:49.365 06:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:20:49.365 06:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:20:49.365 06:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:20:49.365 06:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_error --
host/digest.sh@56 -- # qd=128 00:20:49.365 06:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:20:49.365 06:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=94488 00:20:49.365 06:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 94488 /var/tmp/bperf.sock 00:20:49.365 06:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 94488 ']' 00:20:49.365 06:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:49.365 06:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:49.365 06:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:49.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:49.365 06:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:49.365 06:07:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:49.365 [2024-07-13 06:07:40.937851] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:20:49.365 [2024-07-13 06:07:40.937956] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94488 ] 00:20:49.365 [2024-07-13 06:07:41.074111] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.623 [2024-07-13 06:07:41.116454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:49.623 [2024-07-13 06:07:41.149609] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:49.623 06:07:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:49.623 06:07:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:20:49.623 06:07:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:49.623 06:07:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:49.882 06:07:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:49.882 06:07:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.882 06:07:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:49.882 06:07:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.882 06:07:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:49.882 06:07:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
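
Note on the check that closed the previous 2-second run: the verdict for a digest-error pass comes from the initiator-side NVMe error counters, as the get_transient_errcount / bdev_get_iostat / jq trace above shows. That step can be read as the following bash helper. It is reconstructed from this trace, not copied from host/digest.sh; the rpc.py path, the /var/tmp/bperf.sock socket and the jq filter are exactly the ones shown above, the function body itself is an approximation.

    # Count completions that ended in COMMAND TRANSIENT TRANSPORT ERROR for one bdev.
    # The counters are available because bdev_nvme_set_options --nvme-error-stat was passed at setup.
    get_transient_errcount() {
        local bdev=$1
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0]
                     | .driver_specific
                     | .nvme_error
                     | .status_code
                     | .command_transient_transport_error'
    }

    # The test then requires a non-zero count; the run above saw 392:
    (( $(get_transient_errcount nvme0n1) > 0 ))
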
bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:50.141 nvme0n1 00:20:50.141 06:07:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:20:50.141 06:07:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.141 06:07:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:50.141 06:07:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.141 06:07:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:50.141 06:07:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:50.399 Running I/O for 2 seconds... 00:20:50.399 [2024-07-13 06:07:41.952696] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190fef90 00:20:50.399 [2024-07-13 06:07:41.955719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.399 [2024-07-13 06:07:41.955759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:50.399 [2024-07-13 06:07:41.971188] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190feb58 00:20:50.399 [2024-07-13 06:07:41.974013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.399 [2024-07-13 06:07:41.974063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:50.399 [2024-07-13 06:07:41.989572] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190fe2e8 00:20:50.399 [2024-07-13 06:07:41.992589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.399 [2024-07-13 06:07:41.992638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:50.399 [2024-07-13 06:07:42.008947] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190fda78 00:20:50.399 [2024-07-13 06:07:42.011820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.399 [2024-07-13 06:07:42.011853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:50.399 [2024-07-13 06:07:42.027384] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190fd208 00:20:50.399 [2024-07-13 06:07:42.030283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.399 [2024-07-13 06:07:42.030316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:50.399 [2024-07-13 06:07:42.046782] 
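
Note on this pass (run_bperf_err randwrite 4096 128, traced above): the setup follows the same per-pass recipe as before. Condensed into bash, with command names and arguments taken verbatim from the trace (bperf_rpc and rpc_cmd are the harness wrappers seen above; the trace shows bperf_rpc expanding to rpc.py -s /var/tmp/bperf.sock), it is roughly:

    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1      # keep per-bdev NVMe error stats; -1 retries indefinitely
    rpc_cmd accel_error_inject_error -o crc32c -t disable                        # clear any previous crc32c error injection
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                           # --ddgst enables the TCP data digest
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256                 # re-arm crc32c corruption in the accel layer
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

With the data digest enabled and crc32c results being corrupted, the long run of paired "Data digest error on tqpair" / "COMMAND TRANSIENT TRANSPORT ERROR (00/22)" messages that follows is the expected outcome; those completions are what get_transient_errcount adds up at the end of the pass.
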
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190fc998 00:20:50.399 [2024-07-13 06:07:42.049522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.399 [2024-07-13 06:07:42.049570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:50.399 [2024-07-13 06:07:42.065581] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190fc128 00:20:50.399 [2024-07-13 06:07:42.068316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.399 [2024-07-13 06:07:42.068392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:50.399 [2024-07-13 06:07:42.084756] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190fb8b8 00:20:50.399 [2024-07-13 06:07:42.087455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.399 [2024-07-13 06:07:42.087496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:50.399 [2024-07-13 06:07:42.103604] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190fb048 00:20:50.399 [2024-07-13 06:07:42.106997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:22537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.399 [2024-07-13 06:07:42.107046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:50.399 [2024-07-13 06:07:42.123532] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190fa7d8 00:20:50.400 [2024-07-13 06:07:42.126127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:4208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.400 [2024-07-13 06:07:42.126162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:50.658 [2024-07-13 06:07:42.142440] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190f9f68 00:20:50.658 [2024-07-13 06:07:42.145082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:15139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.658 [2024-07-13 06:07:42.145146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:50.658 [2024-07-13 06:07:42.160841] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190f96f8 00:20:50.658 [2024-07-13 06:07:42.163538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:18890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.658 [2024-07-13 06:07:42.163591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:50.658 
[2024-07-13 06:07:42.179430] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190f8e88 00:20:50.658 [2024-07-13 06:07:42.182070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.658 [2024-07-13 06:07:42.182118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:50.658 [2024-07-13 06:07:42.197950] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190f8618 00:20:50.658 [2024-07-13 06:07:42.200471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:2305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.658 [2024-07-13 06:07:42.200510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:50.658 [2024-07-13 06:07:42.215655] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190f7da8 00:20:50.658 [2024-07-13 06:07:42.218335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:14958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.658 [2024-07-13 06:07:42.218379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:50.658 [2024-07-13 06:07:42.233865] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190f7538 00:20:50.658 [2024-07-13 06:07:42.236489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:10078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.658 [2024-07-13 06:07:42.236542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:50.658 [2024-07-13 06:07:42.252541] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190f6cc8 00:20:50.658 [2024-07-13 06:07:42.255196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:19531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.658 [2024-07-13 06:07:42.255230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:50.658 [2024-07-13 06:07:42.270853] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190f6458 00:20:50.658 [2024-07-13 06:07:42.273378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:2029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.658 [2024-07-13 06:07:42.273416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:50.658 [2024-07-13 06:07:42.289255] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190f5be8 00:20:50.658 [2024-07-13 06:07:42.291745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:22991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.658 [2024-07-13 06:07:42.291779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005d p:0 
m:0 dnr:0 00:20:50.658 [2024-07-13 06:07:42.307681] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190f5378 00:20:50.658 [2024-07-13 06:07:42.310425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:11759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.658 [2024-07-13 06:07:42.310459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:50.658 [2024-07-13 06:07:42.326420] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190f4b08 00:20:50.658 [2024-07-13 06:07:42.329205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.658 [2024-07-13 06:07:42.329254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:50.658 [2024-07-13 06:07:42.345806] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190f4298 00:20:50.658 [2024-07-13 06:07:42.348348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:13891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.658 [2024-07-13 06:07:42.348394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:50.658 [2024-07-13 06:07:42.364060] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190f3a28 00:20:50.658 [2024-07-13 06:07:42.366491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:6287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.658 [2024-07-13 06:07:42.366571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:50.658 [2024-07-13 06:07:42.382281] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190f31b8 00:20:50.658 [2024-07-13 06:07:42.384688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:21472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.658 [2024-07-13 06:07:42.384736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:50.917 [2024-07-13 06:07:42.400947] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190f2948 00:20:50.917 [2024-07-13 06:07:42.403454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.917 [2024-07-13 06:07:42.403509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:50.917 [2024-07-13 06:07:42.419486] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190f20d8 00:20:50.917 [2024-07-13 06:07:42.421817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:15164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.917 [2024-07-13 06:07:42.421871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 
cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:50.917 [2024-07-13 06:07:42.438057] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190f1868 00:20:50.917 [2024-07-13 06:07:42.440393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:11686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.917 [2024-07-13 06:07:42.440437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:50.917 [2024-07-13 06:07:42.456429] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190f0ff8 00:20:50.917 [2024-07-13 06:07:42.458608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:5317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.917 [2024-07-13 06:07:42.458642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:50.917 [2024-07-13 06:07:42.473749] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190f0788 00:20:50.917 [2024-07-13 06:07:42.475827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.917 [2024-07-13 06:07:42.475892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:50.917 [2024-07-13 06:07:42.492305] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190eff18 00:20:50.917 [2024-07-13 06:07:42.494702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:21062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.917 [2024-07-13 06:07:42.494782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:50.917 [2024-07-13 06:07:42.510652] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190ef6a8 00:20:50.917 [2024-07-13 06:07:42.512868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:3515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.917 [2024-07-13 06:07:42.512901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:50.917 [2024-07-13 06:07:42.529036] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190eee38 00:20:50.917 [2024-07-13 06:07:42.531077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.917 [2024-07-13 06:07:42.531126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:50.917 [2024-07-13 06:07:42.546970] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190ee5c8 00:20:50.917 [2024-07-13 06:07:42.549118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:15961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.917 [2024-07-13 06:07:42.549165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:50.917 [2024-07-13 06:07:42.564782] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190edd58 00:20:50.917 [2024-07-13 06:07:42.566937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:8441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.917 [2024-07-13 06:07:42.566985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:50.917 [2024-07-13 06:07:42.582904] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190ed4e8 00:20:50.917 [2024-07-13 06:07:42.584876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.917 [2024-07-13 06:07:42.584910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:50.917 [2024-07-13 06:07:42.600561] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190ecc78 00:20:50.917 [2024-07-13 06:07:42.602655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:15272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.917 [2024-07-13 06:07:42.602702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:50.917 [2024-07-13 06:07:42.618492] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190ec408 00:20:50.917 [2024-07-13 06:07:42.620454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:8713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.917 [2024-07-13 06:07:42.620505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:50.917 [2024-07-13 06:07:42.636054] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190ebb98 00:20:50.917 [2024-07-13 06:07:42.637924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:5412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.917 [2024-07-13 06:07:42.637958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:51.176 [2024-07-13 06:07:42.654141] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190eb328 00:20:51.176 [2024-07-13 06:07:42.656402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:5611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.176 [2024-07-13 06:07:42.656470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:51.176 [2024-07-13 06:07:42.672911] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190eaab8 00:20:51.176 [2024-07-13 06:07:42.674959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:23740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.176 [2024-07-13 06:07:42.675007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:51.176 [2024-07-13 06:07:42.691237] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190ea248 00:20:51.176 [2024-07-13 06:07:42.693341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:7836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.176 [2024-07-13 06:07:42.693398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:51.176 [2024-07-13 06:07:42.709606] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190e99d8 00:20:51.176 [2024-07-13 06:07:42.711655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:22208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.176 [2024-07-13 06:07:42.711702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:51.176 [2024-07-13 06:07:42.727877] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190e9168 00:20:51.176 [2024-07-13 06:07:42.729994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.176 [2024-07-13 06:07:42.730027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:51.176 [2024-07-13 06:07:42.747142] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190e88f8 00:20:51.176 [2024-07-13 06:07:42.748994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:21235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.176 [2024-07-13 06:07:42.749026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:51.176 [2024-07-13 06:07:42.765222] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190e8088 00:20:51.176 [2024-07-13 06:07:42.767237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:14049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.176 [2024-07-13 06:07:42.767284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:51.176 [2024-07-13 06:07:42.783859] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190e7818 00:20:51.176 [2024-07-13 06:07:42.785795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:18938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.176 [2024-07-13 06:07:42.785856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:51.176 [2024-07-13 06:07:42.802648] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190e6fa8 00:20:51.176 [2024-07-13 06:07:42.804754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:9244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.176 [2024-07-13 06:07:42.804821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:51.176 [2024-07-13 06:07:42.821953] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190e6738 00:20:51.177 [2024-07-13 06:07:42.823785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:13774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.177 [2024-07-13 06:07:42.823848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:51.177 [2024-07-13 06:07:42.840715] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190e5ec8 00:20:51.177 [2024-07-13 06:07:42.842681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:15959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.177 [2024-07-13 06:07:42.842744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:51.177 [2024-07-13 06:07:42.859106] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190e5658 00:20:51.177 [2024-07-13 06:07:42.860912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:24876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.177 [2024-07-13 06:07:42.860945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:51.177 [2024-07-13 06:07:42.877603] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190e4de8 00:20:51.177 [2024-07-13 06:07:42.879457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:10246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.177 [2024-07-13 06:07:42.879519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:51.177 [2024-07-13 06:07:42.895688] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190e4578 00:20:51.177 [2024-07-13 06:07:42.897556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:15874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.177 [2024-07-13 06:07:42.897604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:51.435 [2024-07-13 06:07:42.914397] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190e3d08 00:20:51.435 [2024-07-13 06:07:42.916302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:13102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.435 [2024-07-13 06:07:42.916351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:51.435 [2024-07-13 06:07:42.933166] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190e3498 00:20:51.435 [2024-07-13 06:07:42.934778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:14051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.435 [2024-07-13 06:07:42.934812] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:51.435 [2024-07-13 06:07:42.952066] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190e2c28 00:20:51.435 [2024-07-13 06:07:42.953683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:16971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.435 [2024-07-13 06:07:42.953746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:51.435 [2024-07-13 06:07:42.970859] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190e23b8 00:20:51.435 [2024-07-13 06:07:42.972580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:24435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.435 [2024-07-13 06:07:42.972614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:51.435 [2024-07-13 06:07:42.989473] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190e1b48 00:20:51.435 [2024-07-13 06:07:42.991288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:23623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.435 [2024-07-13 06:07:42.991351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:51.435 [2024-07-13 06:07:43.008335] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190e12d8 00:20:51.435 [2024-07-13 06:07:43.010037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:15322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.435 [2024-07-13 06:07:43.010085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:51.435 [2024-07-13 06:07:43.026938] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190e0a68 00:20:51.435 [2024-07-13 06:07:43.028494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:21347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.435 [2024-07-13 06:07:43.028540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:51.435 [2024-07-13 06:07:43.044978] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190e01f8 00:20:51.435 [2024-07-13 06:07:43.046592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.435 [2024-07-13 06:07:43.046641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:51.435 [2024-07-13 06:07:43.063814] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190df988 00:20:51.435 [2024-07-13 06:07:43.065303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:16174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.435 
[2024-07-13 06:07:43.065336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:51.435 [2024-07-13 06:07:43.081624] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190df118 00:20:51.435 [2024-07-13 06:07:43.083174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:20733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.435 [2024-07-13 06:07:43.083221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:51.435 [2024-07-13 06:07:43.100206] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190de8a8 00:20:51.435 [2024-07-13 06:07:43.101762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.435 [2024-07-13 06:07:43.101809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:51.435 [2024-07-13 06:07:43.119408] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190de038 00:20:51.435 [2024-07-13 06:07:43.120917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.435 [2024-07-13 06:07:43.120949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:51.435 [2024-07-13 06:07:43.145841] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190de038 00:20:51.435 [2024-07-13 06:07:43.148863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:17957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.435 [2024-07-13 06:07:43.148912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.694 [2024-07-13 06:07:43.164789] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190de8a8 00:20:51.694 [2024-07-13 06:07:43.167927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:8549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.694 [2024-07-13 06:07:43.167959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:51.694 [2024-07-13 06:07:43.183721] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190df118 00:20:51.694 [2024-07-13 06:07:43.186567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:11861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.694 [2024-07-13 06:07:43.186616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:51.694 [2024-07-13 06:07:43.202421] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190df988 00:20:51.694 [2024-07-13 06:07:43.205320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:20449 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:20:51.694 [2024-07-13 06:07:43.205369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:51.694 [2024-07-13 06:07:43.221012] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190e01f8 00:20:51.694 [2024-07-13 06:07:43.223953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:16924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.694 [2024-07-13 06:07:43.223985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:51.694 [2024-07-13 06:07:43.239958] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190e0a68 00:20:51.694 [2024-07-13 06:07:43.242787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:10337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.694 [2024-07-13 06:07:43.242836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:51.694 [2024-07-13 06:07:43.258145] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190e12d8 00:20:51.694 [2024-07-13 06:07:43.260778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:16784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.694 [2024-07-13 06:07:43.260842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:51.694 [2024-07-13 06:07:43.276195] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190e1b48 00:20:51.694 [2024-07-13 06:07:43.278890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:16100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.694 [2024-07-13 06:07:43.278926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:51.694 [2024-07-13 06:07:43.295301] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190e23b8 00:20:51.694 [2024-07-13 06:07:43.298197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:14514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.694 [2024-07-13 06:07:43.298246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:51.694 [2024-07-13 06:07:43.314639] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190e2c28 00:20:51.694 [2024-07-13 06:07:43.317349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:10091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.694 [2024-07-13 06:07:43.317422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:51.694 [2024-07-13 06:07:43.333930] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190e3498 00:20:51.694 [2024-07-13 06:07:43.336591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 
lba:7772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.694 [2024-07-13 06:07:43.336640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:51.694 [2024-07-13 06:07:43.353320] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190e3d08 00:20:51.694 [2024-07-13 06:07:43.356213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:15463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.694 [2024-07-13 06:07:43.356278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:51.694 [2024-07-13 06:07:43.372921] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190e4578 00:20:51.694 [2024-07-13 06:07:43.375712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:9781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.694 [2024-07-13 06:07:43.375760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:51.694 [2024-07-13 06:07:43.392261] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190e4de8 00:20:51.694 [2024-07-13 06:07:43.394822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:4142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.694 [2024-07-13 06:07:43.394871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:51.694 [2024-07-13 06:07:43.411648] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190e5658 00:20:51.694 [2024-07-13 06:07:43.414301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:7582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.694 [2024-07-13 06:07:43.414335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:51.975 [2024-07-13 06:07:43.429693] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190e5ec8 00:20:51.975 [2024-07-13 06:07:43.432261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:5482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.975 [2024-07-13 06:07:43.432308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:51.975 [2024-07-13 06:07:43.447517] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190e6738 00:20:51.975 [2024-07-13 06:07:43.449956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:2981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.975 [2024-07-13 06:07:43.450005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:51.975 [2024-07-13 06:07:43.465349] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190e6fa8 00:20:51.975 [2024-07-13 06:07:43.467911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:56 nsid:1 lba:5766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.975 [2024-07-13 06:07:43.467960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:51.975 [2024-07-13 06:07:43.483987] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190e7818 00:20:51.975 [2024-07-13 06:07:43.486498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:15344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.975 [2024-07-13 06:07:43.486532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:51.975 [2024-07-13 06:07:43.502473] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190e8088 00:20:51.975 [2024-07-13 06:07:43.504954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:4129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.975 [2024-07-13 06:07:43.505015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:51.975 [2024-07-13 06:07:43.521494] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190e88f8 00:20:51.975 [2024-07-13 06:07:43.523983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:6323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.975 [2024-07-13 06:07:43.524015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:51.975 [2024-07-13 06:07:43.540021] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190e9168 00:20:51.975 [2024-07-13 06:07:43.542482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:10877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.975 [2024-07-13 06:07:43.542516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:51.975 [2024-07-13 06:07:43.558499] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190e99d8 00:20:51.975 [2024-07-13 06:07:43.560886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:7066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.975 [2024-07-13 06:07:43.560921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:51.975 [2024-07-13 06:07:43.576788] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190ea248 00:20:51.975 [2024-07-13 06:07:43.579089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:22001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.975 [2024-07-13 06:07:43.579137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:51.975 [2024-07-13 06:07:43.595434] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190eaab8 00:20:51.975 [2024-07-13 06:07:43.597913] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.975 [2024-07-13 06:07:43.597962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:51.975 [2024-07-13 06:07:43.614504] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190eb328 00:20:51.975 [2024-07-13 06:07:43.616893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:10834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.975 [2024-07-13 06:07:43.616941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:51.975 [2024-07-13 06:07:43.632854] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190ebb98 00:20:51.975 [2024-07-13 06:07:43.635227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:25234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.975 [2024-07-13 06:07:43.635275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:51.975 [2024-07-13 06:07:43.651469] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190ec408 00:20:51.975 [2024-07-13 06:07:43.653882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:6509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.975 [2024-07-13 06:07:43.653930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:51.975 [2024-07-13 06:07:43.670416] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190ecc78 00:20:51.975 [2024-07-13 06:07:43.672637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.975 [2024-07-13 06:07:43.672685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:51.975 [2024-07-13 06:07:43.689490] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190ed4e8 00:20:51.975 [2024-07-13 06:07:43.691868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.975 [2024-07-13 06:07:43.691916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:52.234 [2024-07-13 06:07:43.708272] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190edd58 00:20:52.234 [2024-07-13 06:07:43.710553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.234 [2024-07-13 06:07:43.710585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:52.234 [2024-07-13 06:07:43.727613] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190ee5c8 00:20:52.234 [2024-07-13 
06:07:43.729751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.234 [2024-07-13 06:07:43.729814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.234 [2024-07-13 06:07:43.746116] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190eee38 00:20:52.234 [2024-07-13 06:07:43.748350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.234 [2024-07-13 06:07:43.748437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:52.234 [2024-07-13 06:07:43.765995] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190ef6a8 00:20:52.234 [2024-07-13 06:07:43.768062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.234 [2024-07-13 06:07:43.768098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:52.234 [2024-07-13 06:07:43.784996] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190eff18 00:20:52.234 [2024-07-13 06:07:43.787099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.234 [2024-07-13 06:07:43.787146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:52.234 [2024-07-13 06:07:43.804617] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190f0788 00:20:52.234 [2024-07-13 06:07:43.806795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:21924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.234 [2024-07-13 06:07:43.806845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:52.234 [2024-07-13 06:07:43.822937] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190f0ff8 00:20:52.234 [2024-07-13 06:07:43.824909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:18642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.234 [2024-07-13 06:07:43.824942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:52.234 [2024-07-13 06:07:43.841068] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190f1868 00:20:52.234 [2024-07-13 06:07:43.843065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:5657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.234 [2024-07-13 06:07:43.843113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:52.234 [2024-07-13 06:07:43.858876] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190f20d8 00:20:52.234 
[2024-07-13 06:07:43.860779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:21577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.234 [2024-07-13 06:07:43.860827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:52.234 [2024-07-13 06:07:43.876695] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190f2948 00:20:52.234 [2024-07-13 06:07:43.878726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:10104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.234 [2024-07-13 06:07:43.878772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:52.234 [2024-07-13 06:07:43.894791] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190f31b8 00:20:52.234 [2024-07-13 06:07:43.896752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:9546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.234 [2024-07-13 06:07:43.896798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:52.234 [2024-07-13 06:07:43.913823] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1021240) with pdu=0x2000190f3a28 00:20:52.234 [2024-07-13 06:07:43.915779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:18800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.234 [2024-07-13 06:07:43.915841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:52.234 00:20:52.234 Latency(us) 00:20:52.234 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:52.234 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:52.234 nvme0n1 : 2.00 13575.22 53.03 0.00 0.00 9420.24 8221.79 35508.60 00:20:52.234 =================================================================================================================== 00:20:52.234 Total : 13575.22 53.03 0.00 0.00 9420.24 8221.79 35508.60 00:20:52.234 0 00:20:52.234 06:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:20:52.234 06:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:20:52.234 06:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:20:52.234 06:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:20:52.234 | .driver_specific 00:20:52.234 | .nvme_error 00:20:52.234 | .status_code 00:20:52.234 | .command_transient_transport_error' 00:20:52.494 06:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 106 > 0 )) 00:20:52.494 06:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 94488 00:20:52.494 06:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 94488 ']' 00:20:52.494 06:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 94488 00:20:52.752 06:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@953 -- # uname 00:20:52.752 06:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:52.752 06:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94488 00:20:52.752 killing process with pid 94488 00:20:52.752 Received shutdown signal, test time was about 2.000000 seconds 00:20:52.752 00:20:52.752 Latency(us) 00:20:52.752 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:52.752 =================================================================================================================== 00:20:52.752 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:52.752 06:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:52.752 06:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:52.752 06:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94488' 00:20:52.752 06:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 94488 00:20:52.752 06:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 94488 00:20:52.752 06:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:20:52.752 06:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:20:52.752 06:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:20:52.752 06:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:20:52.752 06:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:20:52.752 06:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=94531 00:20:52.752 06:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 94531 /var/tmp/bperf.sock 00:20:52.752 06:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:20:52.752 06:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 94531 ']' 00:20:52.752 06:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:52.752 06:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:52.753 06:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:52.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:52.753 06:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:52.753 06:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:52.753 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:52.753 Zero copy mechanism will not be used. 00:20:52.753 [2024-07-13 06:07:44.464207] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:20:52.753 [2024-07-13 06:07:44.464307] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94531 ] 00:20:53.010 [2024-07-13 06:07:44.601536] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.010 [2024-07-13 06:07:44.642729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:53.010 [2024-07-13 06:07:44.675344] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:53.010 06:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:53.010 06:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:20:53.010 06:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:53.010 06:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:53.268 06:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:53.269 06:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.269 06:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:53.269 06:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.269 06:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:53.269 06:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:53.837 nvme0n1 00:20:53.837 06:07:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:20:53.837 06:07:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.837 06:07:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:53.837 06:07:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.837 06:07:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:53.837 06:07:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:53.837 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:53.837 Zero copy mechanism will not be used. 00:20:53.837 Running I/O for 2 seconds... 
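[editor's note] The shell trace above captures the setup for this digest-error pass: bdev_nvme_set_options enables per-controller NVMe error statistics and unlimited bdev retries, accel_error_inject_error first disables and later arms crc32c corruption, the controller is attached over TCP with --ddgst so data digests are generated and checked, and bperf_py perform_tests starts the timed random-write run whose COMMAND TRANSIENT TRANSPORT ERROR completions are logged below. A minimal sketch of the same RPC sequence, not the test script itself; the socket path, target address, NQN and bdev name are copied from the trace for this CI run, and the target-side socket /var/tmp/spdk.sock is an assumption (rpc_cmd in the trace uses the application's default socket):

  # --- sketch only: mirrors the RPC sequence traced by host/digest.sh above ---
  BPERF_RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
  TARGET_RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"   # assumed default socket used by rpc_cmd
  BPERF_PY="/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock"

  # count NVMe errors per controller and retry failed I/O indefinitely inside bdevperf
  $BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # keep crc32c injection disabled while the controller is attached
  $TARGET_RPC accel_error_inject_error -o crc32c -t disable

  # attach the NVMe-oF/TCP controller with data digest (--ddgst) enabled
  $BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # corrupt the next 32 crc32c calculations, then run the timed workload
  $TARGET_RPC accel_error_inject_error -o crc32c -t corrupt -i 32
  $BPERF_PY perform_tests

Each corrupted digest shows up below as a tcp.c data_crc32_calc_done error paired with a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion on the affected WRITE.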
00:20:53.837 [2024-07-13 06:07:45.424495] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:53.837 [2024-07-13 06:07:45.424850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.837 [2024-07-13 06:07:45.424910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.837 [2024-07-13 06:07:45.430426] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:53.837 [2024-07-13 06:07:45.430803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.837 [2024-07-13 06:07:45.430830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.837 [2024-07-13 06:07:45.436457] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:53.837 [2024-07-13 06:07:45.436851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.837 [2024-07-13 06:07:45.436881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.837 [2024-07-13 06:07:45.442521] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:53.837 [2024-07-13 06:07:45.442914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.837 [2024-07-13 06:07:45.442994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.837 [2024-07-13 06:07:45.448942] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:53.837 [2024-07-13 06:07:45.449280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.837 [2024-07-13 06:07:45.449310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.837 [2024-07-13 06:07:45.455131] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:53.837 [2024-07-13 06:07:45.455522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.837 [2024-07-13 06:07:45.455565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.837 [2024-07-13 06:07:45.461275] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:53.837 [2024-07-13 06:07:45.461630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.837 [2024-07-13 06:07:45.461673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.837 [2024-07-13 06:07:45.467382] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:53.837 [2024-07-13 06:07:45.467780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.837 [2024-07-13 06:07:45.467811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.837 [2024-07-13 06:07:45.473247] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:53.837 [2024-07-13 06:07:45.473652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.837 [2024-07-13 06:07:45.473687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.837 [2024-07-13 06:07:45.479539] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:53.837 [2024-07-13 06:07:45.479882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.837 [2024-07-13 06:07:45.479919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.837 [2024-07-13 06:07:45.485675] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:53.837 [2024-07-13 06:07:45.486157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.837 [2024-07-13 06:07:45.486187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.837 [2024-07-13 06:07:45.492171] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:53.837 [2024-07-13 06:07:45.492564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.838 [2024-07-13 06:07:45.492634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.838 [2024-07-13 06:07:45.498722] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:53.838 [2024-07-13 06:07:45.499136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.838 [2024-07-13 06:07:45.499181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.838 [2024-07-13 06:07:45.505261] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:53.838 [2024-07-13 06:07:45.505621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.838 [2024-07-13 06:07:45.505652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.838 [2024-07-13 06:07:45.511631] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:53.838 [2024-07-13 06:07:45.512065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.838 [2024-07-13 06:07:45.512094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.838 [2024-07-13 06:07:45.517998] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:53.838 [2024-07-13 06:07:45.518346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.838 [2024-07-13 06:07:45.518394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.838 [2024-07-13 06:07:45.523702] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:53.838 [2024-07-13 06:07:45.524023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.838 [2024-07-13 06:07:45.524053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.838 [2024-07-13 06:07:45.529553] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:53.838 [2024-07-13 06:07:45.529863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.838 [2024-07-13 06:07:45.529894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.838 [2024-07-13 06:07:45.535451] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:53.838 [2024-07-13 06:07:45.535787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.838 [2024-07-13 06:07:45.535824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.838 [2024-07-13 06:07:45.541587] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:53.838 [2024-07-13 06:07:45.541959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.838 [2024-07-13 06:07:45.541989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.838 [2024-07-13 06:07:45.548096] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:53.838 [2024-07-13 06:07:45.548488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.838 [2024-07-13 06:07:45.548528] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.838 [2024-07-13 06:07:45.554454] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:53.838 [2024-07-13 06:07:45.554828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.838 [2024-07-13 06:07:45.554873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.838 [2024-07-13 06:07:45.560751] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:53.838 [2024-07-13 06:07:45.561106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.838 [2024-07-13 06:07:45.561180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:54.097 [2024-07-13 06:07:45.567149] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.098 [2024-07-13 06:07:45.567492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.098 [2024-07-13 06:07:45.567533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.098 [2024-07-13 06:07:45.573421] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.098 [2024-07-13 06:07:45.573936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.098 [2024-07-13 06:07:45.573966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:54.098 [2024-07-13 06:07:45.579631] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.098 [2024-07-13 06:07:45.579943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.098 [2024-07-13 06:07:45.579973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:54.098 [2024-07-13 06:07:45.586044] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.098 [2024-07-13 06:07:45.586422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.098 [2024-07-13 06:07:45.586452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:54.098 [2024-07-13 06:07:45.592570] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.098 [2024-07-13 06:07:45.592914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.098 
[2024-07-13 06:07:45.592944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.098 [2024-07-13 06:07:45.599115] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.098 [2024-07-13 06:07:45.599513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.098 [2024-07-13 06:07:45.599578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:54.098 [2024-07-13 06:07:45.605807] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.098 [2024-07-13 06:07:45.606369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.098 [2024-07-13 06:07:45.606407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:54.098 [2024-07-13 06:07:45.612697] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.098 [2024-07-13 06:07:45.613081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.098 [2024-07-13 06:07:45.613130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:54.098 [2024-07-13 06:07:45.619218] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.098 [2024-07-13 06:07:45.619625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.098 [2024-07-13 06:07:45.619655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.098 [2024-07-13 06:07:45.625794] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.098 [2024-07-13 06:07:45.626186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.098 [2024-07-13 06:07:45.626241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:54.098 [2024-07-13 06:07:45.632352] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.098 [2024-07-13 06:07:45.632752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.098 [2024-07-13 06:07:45.632789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:54.098 [2024-07-13 06:07:45.638747] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.098 [2024-07-13 06:07:45.639145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.098 [2024-07-13 06:07:45.639210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:54.098 [2024-07-13 06:07:45.645147] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.098 [2024-07-13 06:07:45.645512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.098 [2024-07-13 06:07:45.645584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.098 [2024-07-13 06:07:45.651367] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.098 [2024-07-13 06:07:45.651794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.098 [2024-07-13 06:07:45.651860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:54.098 [2024-07-13 06:07:45.657574] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.098 [2024-07-13 06:07:45.657887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.098 [2024-07-13 06:07:45.657916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:54.098 [2024-07-13 06:07:45.663700] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.098 [2024-07-13 06:07:45.664063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.098 [2024-07-13 06:07:45.664092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:54.098 [2024-07-13 06:07:45.669744] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.098 [2024-07-13 06:07:45.670063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.098 [2024-07-13 06:07:45.670092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.098 [2024-07-13 06:07:45.676241] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.098 [2024-07-13 06:07:45.676648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.098 [2024-07-13 06:07:45.676684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:54.098 [2024-07-13 06:07:45.682810] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.098 [2024-07-13 06:07:45.683146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.098 [2024-07-13 06:07:45.683176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:54.098 [2024-07-13 06:07:45.689015] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.098 [2024-07-13 06:07:45.689358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.098 [2024-07-13 06:07:45.689419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:54.098 [2024-07-13 06:07:45.695046] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.098 [2024-07-13 06:07:45.695376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.098 [2024-07-13 06:07:45.695416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.098 [2024-07-13 06:07:45.701476] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.098 [2024-07-13 06:07:45.701896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.098 [2024-07-13 06:07:45.701925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:54.098 [2024-07-13 06:07:45.707720] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.098 [2024-07-13 06:07:45.708064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.098 [2024-07-13 06:07:45.708095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:54.098 [2024-07-13 06:07:45.713702] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.098 [2024-07-13 06:07:45.714081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.098 [2024-07-13 06:07:45.714127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:54.098 [2024-07-13 06:07:45.719581] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.098 [2024-07-13 06:07:45.719929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.098 [2024-07-13 06:07:45.719973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.098 [2024-07-13 06:07:45.725473] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.098 [2024-07-13 06:07:45.725782] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.098 [2024-07-13 06:07:45.725811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:54.098 [2024-07-13 06:07:45.731132] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.098 [2024-07-13 06:07:45.731460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.098 [2024-07-13 06:07:45.731489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:54.098 [2024-07-13 06:07:45.737273] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.098 [2024-07-13 06:07:45.737692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.098 [2024-07-13 06:07:45.737724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:54.098 [2024-07-13 06:07:45.743537] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.098 [2024-07-13 06:07:45.743887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.098 [2024-07-13 06:07:45.743917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.099 [2024-07-13 06:07:45.749937] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.099 [2024-07-13 06:07:45.750331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.099 [2024-07-13 06:07:45.750362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:54.099 [2024-07-13 06:07:45.755894] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.099 [2024-07-13 06:07:45.756264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.099 [2024-07-13 06:07:45.756294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:54.099 [2024-07-13 06:07:45.762133] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.099 [2024-07-13 06:07:45.762482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.099 [2024-07-13 06:07:45.762512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:54.099 [2024-07-13 06:07:45.768474] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.099 
[2024-07-13 06:07:45.768908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.099 [2024-07-13 06:07:45.768945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.099 [2024-07-13 06:07:45.774309] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.099 [2024-07-13 06:07:45.774640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.099 [2024-07-13 06:07:45.774675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:54.099 [2024-07-13 06:07:45.780629] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.099 [2024-07-13 06:07:45.781023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.099 [2024-07-13 06:07:45.781053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:54.099 [2024-07-13 06:07:45.786678] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.099 [2024-07-13 06:07:45.787053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.099 [2024-07-13 06:07:45.787084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:54.099 [2024-07-13 06:07:45.792711] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.099 [2024-07-13 06:07:45.793048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.099 [2024-07-13 06:07:45.793078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.099 [2024-07-13 06:07:45.798994] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.099 [2024-07-13 06:07:45.799332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.099 [2024-07-13 06:07:45.799362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:54.099 [2024-07-13 06:07:45.805428] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.099 [2024-07-13 06:07:45.805797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.099 [2024-07-13 06:07:45.805832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:54.099 [2024-07-13 06:07:45.811579] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.099 [2024-07-13 06:07:45.811888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.099 [2024-07-13 06:07:45.811918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:54.099 [2024-07-13 06:07:45.817891] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.099 [2024-07-13 06:07:45.818272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.099 [2024-07-13 06:07:45.818302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.099 [2024-07-13 06:07:45.823700] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.359 [2024-07-13 06:07:45.824101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.359 [2024-07-13 06:07:45.824131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:54.359 [2024-07-13 06:07:45.829759] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.359 [2024-07-13 06:07:45.830096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.359 [2024-07-13 06:07:45.830127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:54.359 [2024-07-13 06:07:45.835989] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.359 [2024-07-13 06:07:45.836300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.359 [2024-07-13 06:07:45.836331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:54.359 [2024-07-13 06:07:45.842201] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.359 [2024-07-13 06:07:45.842541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.359 [2024-07-13 06:07:45.842571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.359 [2024-07-13 06:07:45.848087] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.359 [2024-07-13 06:07:45.848483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.359 [2024-07-13 06:07:45.848523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:54.359 [2024-07-13 06:07:45.854843] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.359 [2024-07-13 06:07:45.855229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.359 [2024-07-13 06:07:45.855258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:54.359 [2024-07-13 06:07:45.861145] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.359 [2024-07-13 06:07:45.861553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.359 [2024-07-13 06:07:45.861595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:54.359 [2024-07-13 06:07:45.867367] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.359 [2024-07-13 06:07:45.867772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.359 [2024-07-13 06:07:45.867803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.359 [2024-07-13 06:07:45.873559] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.359 [2024-07-13 06:07:45.873868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.359 [2024-07-13 06:07:45.873898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:54.359 [2024-07-13 06:07:45.879662] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.359 [2024-07-13 06:07:45.879965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.359 [2024-07-13 06:07:45.879990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:54.359 [2024-07-13 06:07:45.885587] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.359 [2024-07-13 06:07:45.885894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.359 [2024-07-13 06:07:45.885925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:54.359 [2024-07-13 06:07:45.891346] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.359 [2024-07-13 06:07:45.891726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.359 [2024-07-13 06:07:45.891791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
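The repeated errors above are the NVMe/TCP data-digest path at work: with data digest (DDGST) enabled on the TCP queue pair, each data PDU payload is covered by a CRC32C checksum, a mismatch is logged by tcp.c as a data digest error, and the affected WRITE is completed back to the host as COMMAND TRANSIENT TRANSPORT ERROR (00/22) with dnr:0, i.e. a retryable transport-level failure rather than a media error. As a rough sketch only (not SPDK's implementation; crc32c() and pdu_data_digest_ok() are hypothetical names), the check amounts to recomputing CRC32C over the received payload and comparing it with the digest carried in the PDU:

    #include <stddef.h>
    #include <stdint.h>
    #include <stdbool.h>

    /* Bitwise CRC32C (Castagnoli, reflected polynomial 0x82F63B78).
     * Illustrative only; production code uses table-driven or SSE4.2 CRC32C. */
    static uint32_t crc32c(const uint8_t *buf, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;

        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int k = 0; k < 8; k++) {
                crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
            }
        }
        return crc ^ 0xFFFFFFFFu;
    }

    /* Hypothetical helper: true when the digest recomputed over the payload
     * matches the digest received in the PDU. A false result is what the log
     * above reports as "Data digest error"; the command then completes with a
     * transient transport error (dnr=0, so the host may retry it). */
    static bool pdu_data_digest_ok(const uint8_t *payload, size_t len,
                                   uint32_t received_ddgst)
    {
        return crc32c(payload, len) == received_ddgst;
    }
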
00:20:54.359 [2024-07-13 06:07:45.897648] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.359 [2024-07-13 06:07:45.897979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.359 [2024-07-13 06:07:45.898054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:54.359 [2024-07-13 06:07:45.904130] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.359 [2024-07-13 06:07:45.904512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.359 [2024-07-13 06:07:45.904583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:54.359 [2024-07-13 06:07:45.910842] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.359 [2024-07-13 06:07:45.911276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.359 [2024-07-13 06:07:45.911321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:54.359 [2024-07-13 06:07:45.917408] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.359 [2024-07-13 06:07:45.917773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.359 [2024-07-13 06:07:45.917803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.359 [2024-07-13 06:07:45.923743] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.359 [2024-07-13 06:07:45.924156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.359 [2024-07-13 06:07:45.924185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:54.359 [2024-07-13 06:07:45.929844] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.359 [2024-07-13 06:07:45.930223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.359 [2024-07-13 06:07:45.930254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:54.359 [2024-07-13 06:07:45.936064] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.359 [2024-07-13 06:07:45.936452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.359 [2024-07-13 06:07:45.936507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:54.359 [2024-07-13 06:07:45.941987] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.359 [2024-07-13 06:07:45.942314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.359 [2024-07-13 06:07:45.942344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.359 [2024-07-13 06:07:45.947758] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.359 [2024-07-13 06:07:45.948149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.359 [2024-07-13 06:07:45.948178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:54.359 [2024-07-13 06:07:45.953867] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.359 [2024-07-13 06:07:45.954248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.359 [2024-07-13 06:07:45.954278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:54.359 [2024-07-13 06:07:45.959947] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.359 [2024-07-13 06:07:45.960360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.359 [2024-07-13 06:07:45.960398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:54.359 [2024-07-13 06:07:45.966277] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.359 [2024-07-13 06:07:45.966602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.359 [2024-07-13 06:07:45.966632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.359 [2024-07-13 06:07:45.972946] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.359 [2024-07-13 06:07:45.973322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.359 [2024-07-13 06:07:45.973352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:54.359 [2024-07-13 06:07:45.978917] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.359 [2024-07-13 06:07:45.979258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.359 [2024-07-13 06:07:45.979307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:54.359 [2024-07-13 06:07:45.985194] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.360 [2024-07-13 06:07:45.985559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.360 [2024-07-13 06:07:45.985588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:54.360 [2024-07-13 06:07:45.991348] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.360 [2024-07-13 06:07:45.991738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.360 [2024-07-13 06:07:45.991778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.360 [2024-07-13 06:07:45.997830] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.360 [2024-07-13 06:07:45.998223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.360 [2024-07-13 06:07:45.998260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:54.360 [2024-07-13 06:07:46.003906] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.360 [2024-07-13 06:07:46.004242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.360 [2024-07-13 06:07:46.004272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:54.360 [2024-07-13 06:07:46.009977] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.360 [2024-07-13 06:07:46.010300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.360 [2024-07-13 06:07:46.010330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:54.360 [2024-07-13 06:07:46.016178] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.360 [2024-07-13 06:07:46.016490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.360 [2024-07-13 06:07:46.016531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.360 [2024-07-13 06:07:46.022539] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.360 [2024-07-13 06:07:46.022946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.360 [2024-07-13 06:07:46.022974] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:54.360 [2024-07-13 06:07:46.028913] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.360 [2024-07-13 06:07:46.029329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.360 [2024-07-13 06:07:46.029359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:54.360 [2024-07-13 06:07:46.035295] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.360 [2024-07-13 06:07:46.035708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.360 [2024-07-13 06:07:46.035773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:54.360 [2024-07-13 06:07:46.041908] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.360 [2024-07-13 06:07:46.042338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.360 [2024-07-13 06:07:46.042379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.360 [2024-07-13 06:07:46.048464] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.360 [2024-07-13 06:07:46.048902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.360 [2024-07-13 06:07:46.048932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:54.360 [2024-07-13 06:07:46.054878] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.360 [2024-07-13 06:07:46.055261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.360 [2024-07-13 06:07:46.055291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:54.360 [2024-07-13 06:07:46.061520] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.360 [2024-07-13 06:07:46.061910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.360 [2024-07-13 06:07:46.061954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:54.360 [2024-07-13 06:07:46.068039] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.360 [2024-07-13 06:07:46.068474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.360 
[2024-07-13 06:07:46.068527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.360 [2024-07-13 06:07:46.074723] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.360 [2024-07-13 06:07:46.075151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.360 [2024-07-13 06:07:46.075180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:54.360 [2024-07-13 06:07:46.081089] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.360 [2024-07-13 06:07:46.081428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.360 [2024-07-13 06:07:46.081471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:54.620 [2024-07-13 06:07:46.087240] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.620 [2024-07-13 06:07:46.087635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.620 [2024-07-13 06:07:46.087665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:54.620 [2024-07-13 06:07:46.093680] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.620 [2024-07-13 06:07:46.094060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.620 [2024-07-13 06:07:46.094089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.620 [2024-07-13 06:07:46.100202] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.620 [2024-07-13 06:07:46.100601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.620 [2024-07-13 06:07:46.100636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:54.620 [2024-07-13 06:07:46.106578] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.620 [2024-07-13 06:07:46.106997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.621 [2024-07-13 06:07:46.107043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:54.621 [2024-07-13 06:07:46.113081] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.621 [2024-07-13 06:07:46.113469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.621 [2024-07-13 06:07:46.113507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:54.621 [2024-07-13 06:07:46.119588] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.621 [2024-07-13 06:07:46.119953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.621 [2024-07-13 06:07:46.119988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.621 [2024-07-13 06:07:46.125806] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.621 [2024-07-13 06:07:46.126230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.621 [2024-07-13 06:07:46.126260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:54.621 [2024-07-13 06:07:46.131889] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.621 [2024-07-13 06:07:46.132249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.621 [2024-07-13 06:07:46.132279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:54.621 [2024-07-13 06:07:46.138080] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.621 [2024-07-13 06:07:46.138448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.621 [2024-07-13 06:07:46.138478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:54.621 [2024-07-13 06:07:46.144175] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.621 [2024-07-13 06:07:46.144557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.621 [2024-07-13 06:07:46.144586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.621 [2024-07-13 06:07:46.150530] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.621 [2024-07-13 06:07:46.150883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.621 [2024-07-13 06:07:46.150912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:54.621 [2024-07-13 06:07:46.157053] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.621 [2024-07-13 06:07:46.157454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.621 [2024-07-13 06:07:46.157494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:54.621 [2024-07-13 06:07:46.163403] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.621 [2024-07-13 06:07:46.163850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.621 [2024-07-13 06:07:46.163880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:54.621 [2024-07-13 06:07:46.169733] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.621 [2024-07-13 06:07:46.170119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.621 [2024-07-13 06:07:46.170148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.621 [2024-07-13 06:07:46.175997] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.621 [2024-07-13 06:07:46.176412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.621 [2024-07-13 06:07:46.176482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:54.621 [2024-07-13 06:07:46.182356] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.621 [2024-07-13 06:07:46.182683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.621 [2024-07-13 06:07:46.182717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:54.621 [2024-07-13 06:07:46.188485] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.621 [2024-07-13 06:07:46.188823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.621 [2024-07-13 06:07:46.188853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:54.621 [2024-07-13 06:07:46.194879] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.621 [2024-07-13 06:07:46.195283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.621 [2024-07-13 06:07:46.195312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.621 [2024-07-13 06:07:46.201264] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.621 [2024-07-13 06:07:46.201651] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.621 [2024-07-13 06:07:46.201726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:54.621 [2024-07-13 06:07:46.208502] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.621 [2024-07-13 06:07:46.208828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.621 [2024-07-13 06:07:46.208859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:54.621 [2024-07-13 06:07:46.214632] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.621 [2024-07-13 06:07:46.214944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.621 [2024-07-13 06:07:46.214973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:54.621 [2024-07-13 06:07:46.220475] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.621 [2024-07-13 06:07:46.220842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.621 [2024-07-13 06:07:46.220901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.621 [2024-07-13 06:07:46.227022] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.621 [2024-07-13 06:07:46.227440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.621 [2024-07-13 06:07:46.227497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:54.621 [2024-07-13 06:07:46.233127] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.621 [2024-07-13 06:07:46.233536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.621 [2024-07-13 06:07:46.233577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:54.621 [2024-07-13 06:07:46.239418] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.621 [2024-07-13 06:07:46.239835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.621 [2024-07-13 06:07:46.239866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:54.622 [2024-07-13 06:07:46.245867] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.622 
[2024-07-13 06:07:46.246232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.622 [2024-07-13 06:07:46.246262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.622 [2024-07-13 06:07:46.251958] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.622 [2024-07-13 06:07:46.252298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.622 [2024-07-13 06:07:46.252328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:54.622 [2024-07-13 06:07:46.258044] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.622 [2024-07-13 06:07:46.258385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.622 [2024-07-13 06:07:46.258416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:54.622 [2024-07-13 06:07:46.264454] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.622 [2024-07-13 06:07:46.264819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.622 [2024-07-13 06:07:46.264850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:54.622 [2024-07-13 06:07:46.270737] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.622 [2024-07-13 06:07:46.271046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.622 [2024-07-13 06:07:46.271076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.622 [2024-07-13 06:07:46.276459] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.622 [2024-07-13 06:07:46.276835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.622 [2024-07-13 06:07:46.276879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:54.622 [2024-07-13 06:07:46.282688] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.622 [2024-07-13 06:07:46.283070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.622 [2024-07-13 06:07:46.283100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:54.622 [2024-07-13 06:07:46.289375] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.622 [2024-07-13 06:07:46.289752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.622 [2024-07-13 06:07:46.289787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:54.622 [2024-07-13 06:07:46.295320] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.622 [2024-07-13 06:07:46.295674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.622 [2024-07-13 06:07:46.295704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.622 [2024-07-13 06:07:46.301210] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.622 [2024-07-13 06:07:46.301614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.622 [2024-07-13 06:07:46.301647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:54.622 [2024-07-13 06:07:46.307442] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.622 [2024-07-13 06:07:46.307846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.622 [2024-07-13 06:07:46.307882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:54.622 [2024-07-13 06:07:46.313672] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.622 [2024-07-13 06:07:46.314014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.622 [2024-07-13 06:07:46.314044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:54.622 [2024-07-13 06:07:46.319896] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.622 [2024-07-13 06:07:46.320262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.622 [2024-07-13 06:07:46.320286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.622 [2024-07-13 06:07:46.326604] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.622 [2024-07-13 06:07:46.327140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.622 [2024-07-13 06:07:46.327354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:54.622 [2024-07-13 06:07:46.333344] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.622 [2024-07-13 06:07:46.333680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.622 [2024-07-13 06:07:46.333713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:54.622 [2024-07-13 06:07:46.339430] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.622 [2024-07-13 06:07:46.339788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.622 [2024-07-13 06:07:46.339820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:54.622 [2024-07-13 06:07:46.345489] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.622 [2024-07-13 06:07:46.345813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.622 [2024-07-13 06:07:46.345844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.883 [2024-07-13 06:07:46.351771] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.883 [2024-07-13 06:07:46.352109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.883 [2024-07-13 06:07:46.352139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:54.883 [2024-07-13 06:07:46.358002] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.883 [2024-07-13 06:07:46.358401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.883 [2024-07-13 06:07:46.358431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:54.883 [2024-07-13 06:07:46.364540] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.883 [2024-07-13 06:07:46.365013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.883 [2024-07-13 06:07:46.365062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:54.883 [2024-07-13 06:07:46.371018] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.883 [2024-07-13 06:07:46.371384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.883 [2024-07-13 06:07:46.371425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:20:54.883 [2024-07-13 06:07:46.377271] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.883 [2024-07-13 06:07:46.377692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.883 [2024-07-13 06:07:46.377728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:54.883 [2024-07-13 06:07:46.383505] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.883 [2024-07-13 06:07:46.383856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.883 [2024-07-13 06:07:46.383915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:54.883 [2024-07-13 06:07:46.390047] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.883 [2024-07-13 06:07:46.390478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.883 [2024-07-13 06:07:46.390509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:54.883 [2024-07-13 06:07:46.396458] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.883 [2024-07-13 06:07:46.396923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.883 [2024-07-13 06:07:46.396983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.883 [2024-07-13 06:07:46.402754] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.883 [2024-07-13 06:07:46.403093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.883 [2024-07-13 06:07:46.403122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:54.883 [2024-07-13 06:07:46.408810] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.883 [2024-07-13 06:07:46.409222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.883 [2024-07-13 06:07:46.409250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:54.883 [2024-07-13 06:07:46.414912] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.883 [2024-07-13 06:07:46.415341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.883 [2024-07-13 06:07:46.415412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:54.883 [2024-07-13 06:07:46.421192] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.883 [2024-07-13 06:07:46.421593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.883 [2024-07-13 06:07:46.421623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.883 [2024-07-13 06:07:46.427766] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.883 [2024-07-13 06:07:46.428130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.883 [2024-07-13 06:07:46.428154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:54.883 [2024-07-13 06:07:46.434463] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.883 [2024-07-13 06:07:46.434792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.883 [2024-07-13 06:07:46.434821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:54.883 [2024-07-13 06:07:46.440342] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.883 [2024-07-13 06:07:46.440733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.883 [2024-07-13 06:07:46.440763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:54.883 [2024-07-13 06:07:46.446562] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.883 [2024-07-13 06:07:46.446927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.883 [2024-07-13 06:07:46.446956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.883 [2024-07-13 06:07:46.452845] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.883 [2024-07-13 06:07:46.453208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.883 [2024-07-13 06:07:46.453237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:54.883 [2024-07-13 06:07:46.459360] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.883 [2024-07-13 06:07:46.459748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.883 [2024-07-13 06:07:46.459785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:54.883 [2024-07-13 06:07:46.465751] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.883 [2024-07-13 06:07:46.466157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.883 [2024-07-13 06:07:46.466187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:54.883 [2024-07-13 06:07:46.472315] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.883 [2024-07-13 06:07:46.472777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.883 [2024-07-13 06:07:46.472821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.883 [2024-07-13 06:07:46.478436] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.883 [2024-07-13 06:07:46.478760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.883 [2024-07-13 06:07:46.478789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:54.883 [2024-07-13 06:07:46.484261] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.883 [2024-07-13 06:07:46.484684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.883 [2024-07-13 06:07:46.484718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:54.883 [2024-07-13 06:07:46.490431] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.883 [2024-07-13 06:07:46.490756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.883 [2024-07-13 06:07:46.490785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:54.884 [2024-07-13 06:07:46.496577] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.884 [2024-07-13 06:07:46.496910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.884 [2024-07-13 06:07:46.496938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.884 [2024-07-13 06:07:46.502967] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.884 [2024-07-13 06:07:46.503362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.884 [2024-07-13 06:07:46.503418] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:54.884 [2024-07-13 06:07:46.509394] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.884 [2024-07-13 06:07:46.509805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.884 [2024-07-13 06:07:46.509835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:54.884 [2024-07-13 06:07:46.515454] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.884 [2024-07-13 06:07:46.515855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.884 [2024-07-13 06:07:46.515886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:54.884 [2024-07-13 06:07:46.521899] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.884 [2024-07-13 06:07:46.522321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.884 [2024-07-13 06:07:46.522350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.884 [2024-07-13 06:07:46.528666] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.884 [2024-07-13 06:07:46.529047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.884 [2024-07-13 06:07:46.529075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:54.884 [2024-07-13 06:07:46.535277] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.884 [2024-07-13 06:07:46.535702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.884 [2024-07-13 06:07:46.535738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:54.884 [2024-07-13 06:07:46.541835] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.884 [2024-07-13 06:07:46.542259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.884 [2024-07-13 06:07:46.542289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:54.884 [2024-07-13 06:07:46.548000] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.884 [2024-07-13 06:07:46.548335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.884 
[2024-07-13 06:07:46.548401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.884 [2024-07-13 06:07:46.553934] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.884 [2024-07-13 06:07:46.554326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.884 [2024-07-13 06:07:46.554357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:54.884 [2024-07-13 06:07:46.560069] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.884 [2024-07-13 06:07:46.560488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.884 [2024-07-13 06:07:46.560529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:54.884 [2024-07-13 06:07:46.566407] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.884 [2024-07-13 06:07:46.566720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.884 [2024-07-13 06:07:46.566750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:54.884 [2024-07-13 06:07:46.572775] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.884 [2024-07-13 06:07:46.573085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.884 [2024-07-13 06:07:46.573116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.884 [2024-07-13 06:07:46.579008] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.884 [2024-07-13 06:07:46.579413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.884 [2024-07-13 06:07:46.579454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:54.884 [2024-07-13 06:07:46.585327] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.884 [2024-07-13 06:07:46.585805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.884 [2024-07-13 06:07:46.585846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:54.884 [2024-07-13 06:07:46.592133] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.884 [2024-07-13 06:07:46.592523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.884 [2024-07-13 06:07:46.592564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:54.884 [2024-07-13 06:07:46.598471] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.884 [2024-07-13 06:07:46.598848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.884 [2024-07-13 06:07:46.598878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.884 [2024-07-13 06:07:46.604889] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:54.884 [2024-07-13 06:07:46.605219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.884 [2024-07-13 06:07:46.605249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:55.144 [2024-07-13 06:07:46.611516] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.144 [2024-07-13 06:07:46.611882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.144 [2024-07-13 06:07:46.611911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:55.144 [2024-07-13 06:07:46.618150] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.144 [2024-07-13 06:07:46.618505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.144 [2024-07-13 06:07:46.618536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:55.144 [2024-07-13 06:07:46.624517] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.144 [2024-07-13 06:07:46.624970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.144 [2024-07-13 06:07:46.625006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:55.144 [2024-07-13 06:07:46.631014] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.144 [2024-07-13 06:07:46.631394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.144 [2024-07-13 06:07:46.631450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:55.144 [2024-07-13 06:07:46.637208] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.144 [2024-07-13 06:07:46.637634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.144 [2024-07-13 06:07:46.637668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:55.144 [2024-07-13 06:07:46.643240] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.144 [2024-07-13 06:07:46.643601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.144 [2024-07-13 06:07:46.643635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:55.144 [2024-07-13 06:07:46.649288] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.144 [2024-07-13 06:07:46.649686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.144 [2024-07-13 06:07:46.649722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:55.144 [2024-07-13 06:07:46.655863] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.144 [2024-07-13 06:07:46.656235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.144 [2024-07-13 06:07:46.656263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:55.144 [2024-07-13 06:07:46.662045] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.144 [2024-07-13 06:07:46.662416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.144 [2024-07-13 06:07:46.662446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:55.144 [2024-07-13 06:07:46.668163] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.144 [2024-07-13 06:07:46.668555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.144 [2024-07-13 06:07:46.668609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:55.144 [2024-07-13 06:07:46.674522] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.144 [2024-07-13 06:07:46.674906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.144 [2024-07-13 06:07:46.674937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:55.144 [2024-07-13 06:07:46.680828] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.144 [2024-07-13 06:07:46.681177] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.144 [2024-07-13 06:07:46.681207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:55.144 [2024-07-13 06:07:46.686793] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.144 [2024-07-13 06:07:46.687122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.144 [2024-07-13 06:07:46.687151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:55.144 [2024-07-13 06:07:46.692957] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.144 [2024-07-13 06:07:46.693283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.144 [2024-07-13 06:07:46.693313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:55.144 [2024-07-13 06:07:46.698869] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.144 [2024-07-13 06:07:46.699258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.144 [2024-07-13 06:07:46.699287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:55.144 [2024-07-13 06:07:46.705031] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.144 [2024-07-13 06:07:46.705420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.144 [2024-07-13 06:07:46.705488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:55.144 [2024-07-13 06:07:46.711373] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.144 [2024-07-13 06:07:46.711713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.144 [2024-07-13 06:07:46.711751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:55.144 [2024-07-13 06:07:46.717542] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.144 [2024-07-13 06:07:46.717941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.144 [2024-07-13 06:07:46.717969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:55.144 [2024-07-13 06:07:46.724202] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.144 
[2024-07-13 06:07:46.724626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.144 [2024-07-13 06:07:46.724675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:55.144 [2024-07-13 06:07:46.730423] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.144 [2024-07-13 06:07:46.730814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.144 [2024-07-13 06:07:46.730857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:55.144 [2024-07-13 06:07:46.736491] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.144 [2024-07-13 06:07:46.736861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.144 [2024-07-13 06:07:46.736890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:55.144 [2024-07-13 06:07:46.742583] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.144 [2024-07-13 06:07:46.742956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.144 [2024-07-13 06:07:46.742986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:55.144 [2024-07-13 06:07:46.748641] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.144 [2024-07-13 06:07:46.749018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.144 [2024-07-13 06:07:46.749047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:55.145 [2024-07-13 06:07:46.754675] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.145 [2024-07-13 06:07:46.755061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.145 [2024-07-13 06:07:46.755105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:55.145 [2024-07-13 06:07:46.761078] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.145 [2024-07-13 06:07:46.761500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.145 [2024-07-13 06:07:46.761540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:55.145 [2024-07-13 06:07:46.767531] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.145 [2024-07-13 06:07:46.767949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.145 [2024-07-13 06:07:46.767994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:55.145 [2024-07-13 06:07:46.773913] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.145 [2024-07-13 06:07:46.774354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.145 [2024-07-13 06:07:46.774395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:55.145 [2024-07-13 06:07:46.780451] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.145 [2024-07-13 06:07:46.780865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.145 [2024-07-13 06:07:46.780897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:55.145 [2024-07-13 06:07:46.786598] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.145 [2024-07-13 06:07:46.786948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.145 [2024-07-13 06:07:46.786982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:55.145 [2024-07-13 06:07:46.792546] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.145 [2024-07-13 06:07:46.792876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.145 [2024-07-13 06:07:46.792921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:55.145 [2024-07-13 06:07:46.798525] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.145 [2024-07-13 06:07:46.798931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.145 [2024-07-13 06:07:46.798994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:55.145 [2024-07-13 06:07:46.804678] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.145 [2024-07-13 06:07:46.805085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.145 [2024-07-13 06:07:46.805113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:55.145 [2024-07-13 06:07:46.811125] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.145 [2024-07-13 06:07:46.811558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.145 [2024-07-13 06:07:46.811630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:55.145 [2024-07-13 06:07:46.817446] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.145 [2024-07-13 06:07:46.817819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.145 [2024-07-13 06:07:46.817854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:55.145 [2024-07-13 06:07:46.823439] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.145 [2024-07-13 06:07:46.823792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.145 [2024-07-13 06:07:46.823828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:55.145 [2024-07-13 06:07:46.829414] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.145 [2024-07-13 06:07:46.829865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.145 [2024-07-13 06:07:46.829896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:55.145 [2024-07-13 06:07:46.835940] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.145 [2024-07-13 06:07:46.836375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.145 [2024-07-13 06:07:46.836412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:55.145 [2024-07-13 06:07:46.842448] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.145 [2024-07-13 06:07:46.842807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.145 [2024-07-13 06:07:46.842835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:55.145 [2024-07-13 06:07:46.849019] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.145 [2024-07-13 06:07:46.849328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.145 [2024-07-13 06:07:46.849358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:20:55.145 [2024-07-13 06:07:46.855326] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.145 [2024-07-13 06:07:46.855752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.145 [2024-07-13 06:07:46.855782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:55.145 [2024-07-13 06:07:46.861590] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.145 [2024-07-13 06:07:46.861994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.145 [2024-07-13 06:07:46.862023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:55.145 [2024-07-13 06:07:46.867721] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.145 [2024-07-13 06:07:46.868116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.145 [2024-07-13 06:07:46.868146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:55.405 [2024-07-13 06:07:46.873844] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.405 [2024-07-13 06:07:46.874219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.405 [2024-07-13 06:07:46.874249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:55.405 [2024-07-13 06:07:46.880059] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.405 [2024-07-13 06:07:46.880464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.405 [2024-07-13 06:07:46.880535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:55.405 [2024-07-13 06:07:46.886414] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.405 [2024-07-13 06:07:46.886790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.405 [2024-07-13 06:07:46.886834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:55.405 [2024-07-13 06:07:46.892915] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.405 [2024-07-13 06:07:46.893273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.405 [2024-07-13 06:07:46.893302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:55.405 [2024-07-13 06:07:46.899165] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.405 [2024-07-13 06:07:46.899554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.405 [2024-07-13 06:07:46.899627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:55.405 [2024-07-13 06:07:46.905488] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.405 [2024-07-13 06:07:46.905815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.405 [2024-07-13 06:07:46.905856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:55.405 [2024-07-13 06:07:46.911535] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.405 [2024-07-13 06:07:46.911911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.405 [2024-07-13 06:07:46.911940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:55.405 [2024-07-13 06:07:46.917443] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.405 [2024-07-13 06:07:46.917763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.405 [2024-07-13 06:07:46.917792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:55.405 [2024-07-13 06:07:46.923598] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.405 [2024-07-13 06:07:46.923981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.405 [2024-07-13 06:07:46.924040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:55.405 [2024-07-13 06:07:46.929739] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.405 [2024-07-13 06:07:46.930095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.405 [2024-07-13 06:07:46.930123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:55.405 [2024-07-13 06:07:46.936044] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.405 [2024-07-13 06:07:46.936457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.405 [2024-07-13 06:07:46.936543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:55.405 [2024-07-13 06:07:46.942543] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.405 [2024-07-13 06:07:46.942851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.405 [2024-07-13 06:07:46.942880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:55.405 [2024-07-13 06:07:46.948970] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.405 [2024-07-13 06:07:46.949355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.405 [2024-07-13 06:07:46.949408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:55.405 [2024-07-13 06:07:46.955304] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.405 [2024-07-13 06:07:46.955703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.405 [2024-07-13 06:07:46.955737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:55.405 [2024-07-13 06:07:46.961427] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.405 [2024-07-13 06:07:46.961780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.405 [2024-07-13 06:07:46.961810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:55.405 [2024-07-13 06:07:46.967171] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.405 [2024-07-13 06:07:46.967555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.405 [2024-07-13 06:07:46.967589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:55.405 [2024-07-13 06:07:46.973524] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.405 [2024-07-13 06:07:46.973860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.405 [2024-07-13 06:07:46.973921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:55.405 [2024-07-13 06:07:46.979824] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.405 [2024-07-13 06:07:46.980182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.405 [2024-07-13 06:07:46.980224] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:55.405 [2024-07-13 06:07:46.986255] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.405 [2024-07-13 06:07:46.986579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.405 [2024-07-13 06:07:46.986615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:55.405 [2024-07-13 06:07:46.992185] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.405 [2024-07-13 06:07:46.992542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.405 [2024-07-13 06:07:46.992567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:55.405 [2024-07-13 06:07:46.998084] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.405 [2024-07-13 06:07:46.998434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.405 [2024-07-13 06:07:46.998464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:55.405 [2024-07-13 06:07:47.003970] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.405 [2024-07-13 06:07:47.004290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.406 [2024-07-13 06:07:47.004321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:55.406 [2024-07-13 06:07:47.009807] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.406 [2024-07-13 06:07:47.010226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.406 [2024-07-13 06:07:47.010257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:55.406 [2024-07-13 06:07:47.015945] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.406 [2024-07-13 06:07:47.016285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.406 [2024-07-13 06:07:47.016315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:55.406 [2024-07-13 06:07:47.021950] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.406 [2024-07-13 06:07:47.022286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.406 
[2024-07-13 06:07:47.022316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:55.406 [2024-07-13 06:07:47.027478] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.406 [2024-07-13 06:07:47.027792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.406 [2024-07-13 06:07:47.027821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:55.406 [2024-07-13 06:07:47.033116] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.406 [2024-07-13 06:07:47.033427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.406 [2024-07-13 06:07:47.033468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:55.406 [2024-07-13 06:07:47.039214] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.406 [2024-07-13 06:07:47.039534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.406 [2024-07-13 06:07:47.039568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:55.406 [2024-07-13 06:07:47.045504] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.406 [2024-07-13 06:07:47.045927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.406 [2024-07-13 06:07:47.045958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:55.406 [2024-07-13 06:07:47.051495] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.406 [2024-07-13 06:07:47.051846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.406 [2024-07-13 06:07:47.051876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:55.406 [2024-07-13 06:07:47.057358] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.406 [2024-07-13 06:07:47.057697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.406 [2024-07-13 06:07:47.057731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:55.406 [2024-07-13 06:07:47.063501] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.406 [2024-07-13 06:07:47.063845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.406 [2024-07-13 06:07:47.063874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:55.406 [2024-07-13 06:07:47.069259] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.406 [2024-07-13 06:07:47.069685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.406 [2024-07-13 06:07:47.069719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:55.406 [2024-07-13 06:07:47.075460] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.406 [2024-07-13 06:07:47.075806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.406 [2024-07-13 06:07:47.075842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:55.406 [2024-07-13 06:07:47.081618] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.406 [2024-07-13 06:07:47.081986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.406 [2024-07-13 06:07:47.082015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:55.406 [2024-07-13 06:07:47.087694] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.406 [2024-07-13 06:07:47.088001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.406 [2024-07-13 06:07:47.088031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:55.406 [2024-07-13 06:07:47.093638] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.406 [2024-07-13 06:07:47.093937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.406 [2024-07-13 06:07:47.093983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:55.406 [2024-07-13 06:07:47.099518] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.406 [2024-07-13 06:07:47.099855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.406 [2024-07-13 06:07:47.099900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:55.406 [2024-07-13 06:07:47.105289] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.406 [2024-07-13 06:07:47.105609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.406 [2024-07-13 06:07:47.105643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:55.406 [2024-07-13 06:07:47.111310] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.406 [2024-07-13 06:07:47.111650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.406 [2024-07-13 06:07:47.111683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:55.406 [2024-07-13 06:07:47.117378] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.406 [2024-07-13 06:07:47.117728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.406 [2024-07-13 06:07:47.117763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:55.406 [2024-07-13 06:07:47.123223] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.406 [2024-07-13 06:07:47.123552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.406 [2024-07-13 06:07:47.123582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:55.406 [2024-07-13 06:07:47.129032] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.406 [2024-07-13 06:07:47.129355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.406 [2024-07-13 06:07:47.129397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:55.666 [2024-07-13 06:07:47.134723] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.666 [2024-07-13 06:07:47.135107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.666 [2024-07-13 06:07:47.135159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:55.666 [2024-07-13 06:07:47.140784] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.666 [2024-07-13 06:07:47.141100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.666 [2024-07-13 06:07:47.141130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:55.666 [2024-07-13 06:07:47.146544] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.666 [2024-07-13 06:07:47.146882] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.666 [2024-07-13 06:07:47.146911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:55.666 [2024-07-13 06:07:47.152533] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.666 [2024-07-13 06:07:47.152913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.666 [2024-07-13 06:07:47.152947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:55.666 [2024-07-13 06:07:47.158362] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.666 [2024-07-13 06:07:47.158716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.666 [2024-07-13 06:07:47.158745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:55.666 [2024-07-13 06:07:47.164183] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.666 [2024-07-13 06:07:47.164581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.666 [2024-07-13 06:07:47.164615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:55.666 [2024-07-13 06:07:47.170275] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.666 [2024-07-13 06:07:47.170595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.666 [2024-07-13 06:07:47.170629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:55.666 [2024-07-13 06:07:47.176677] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.666 [2024-07-13 06:07:47.177095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.666 [2024-07-13 06:07:47.177126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:55.666 [2024-07-13 06:07:47.182831] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.666 [2024-07-13 06:07:47.183120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.666 [2024-07-13 06:07:47.183180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:55.666 [2024-07-13 06:07:47.188805] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.666 
[2024-07-13 06:07:47.189152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.666 [2024-07-13 06:07:47.189197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:55.666 [2024-07-13 06:07:47.195127] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.666 [2024-07-13 06:07:47.195554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.666 [2024-07-13 06:07:47.195598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:55.666 [2024-07-13 06:07:47.201502] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.666 [2024-07-13 06:07:47.201875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.666 [2024-07-13 06:07:47.201909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:55.666 [2024-07-13 06:07:47.207533] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.666 [2024-07-13 06:07:47.207908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.666 [2024-07-13 06:07:47.207952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:55.666 [2024-07-13 06:07:47.213567] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.666 [2024-07-13 06:07:47.213945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.666 [2024-07-13 06:07:47.213973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:55.666 [2024-07-13 06:07:47.219737] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.666 [2024-07-13 06:07:47.220068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.666 [2024-07-13 06:07:47.220112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:55.666 [2024-07-13 06:07:47.225878] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.666 [2024-07-13 06:07:47.226234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.666 [2024-07-13 06:07:47.226264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:55.666 [2024-07-13 06:07:47.232033] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.666 [2024-07-13 06:07:47.232399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.666 [2024-07-13 06:07:47.232455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:55.666 [2024-07-13 06:07:47.238078] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.666 [2024-07-13 06:07:47.238444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.666 [2024-07-13 06:07:47.238479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:55.666 [2024-07-13 06:07:47.244018] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.666 [2024-07-13 06:07:47.244477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.666 [2024-07-13 06:07:47.244517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:55.666 [2024-07-13 06:07:47.250488] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.666 [2024-07-13 06:07:47.250847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.666 [2024-07-13 06:07:47.250877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:55.666 [2024-07-13 06:07:47.256767] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.666 [2024-07-13 06:07:47.257167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.666 [2024-07-13 06:07:47.257196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:55.666 [2024-07-13 06:07:47.263131] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.666 [2024-07-13 06:07:47.263528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.666 [2024-07-13 06:07:47.263612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:55.666 [2024-07-13 06:07:47.269411] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.666 [2024-07-13 06:07:47.269814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.666 [2024-07-13 06:07:47.269850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:55.667 [2024-07-13 06:07:47.275639] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.667 [2024-07-13 06:07:47.276027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.667 [2024-07-13 06:07:47.276086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:55.667 [2024-07-13 06:07:47.281736] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.667 [2024-07-13 06:07:47.282137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.667 [2024-07-13 06:07:47.282167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:55.667 [2024-07-13 06:07:47.287917] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.667 [2024-07-13 06:07:47.288288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.667 [2024-07-13 06:07:47.288347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:55.667 [2024-07-13 06:07:47.294470] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.667 [2024-07-13 06:07:47.294825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.667 [2024-07-13 06:07:47.294870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:55.667 [2024-07-13 06:07:47.300826] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.667 [2024-07-13 06:07:47.301218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.667 [2024-07-13 06:07:47.301247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:55.667 [2024-07-13 06:07:47.307104] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.667 [2024-07-13 06:07:47.307519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.667 [2024-07-13 06:07:47.307559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:55.667 [2024-07-13 06:07:47.313556] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.667 [2024-07-13 06:07:47.313912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.667 [2024-07-13 06:07:47.313946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:20:55.667 [2024-07-13 06:07:47.319750] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.667 [2024-07-13 06:07:47.320132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.667 [2024-07-13 06:07:47.320175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:55.667 [2024-07-13 06:07:47.326188] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.667 [2024-07-13 06:07:47.326539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.667 [2024-07-13 06:07:47.326581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:55.667 [2024-07-13 06:07:47.332749] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.667 [2024-07-13 06:07:47.333082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.667 [2024-07-13 06:07:47.333147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:55.667 [2024-07-13 06:07:47.339109] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.667 [2024-07-13 06:07:47.339432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.667 [2024-07-13 06:07:47.339472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:55.667 [2024-07-13 06:07:47.345668] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.667 [2024-07-13 06:07:47.346034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.667 [2024-07-13 06:07:47.346077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:55.667 [2024-07-13 06:07:47.351821] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.667 [2024-07-13 06:07:47.352160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.667 [2024-07-13 06:07:47.352189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:55.667 [2024-07-13 06:07:47.358351] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.667 [2024-07-13 06:07:47.358679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.667 [2024-07-13 06:07:47.358714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:55.667 [2024-07-13 06:07:47.364568] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.667 [2024-07-13 06:07:47.364971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.667 [2024-07-13 06:07:47.365000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:55.667 [2024-07-13 06:07:47.370613] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.667 [2024-07-13 06:07:47.370984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.667 [2024-07-13 06:07:47.371014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:55.667 [2024-07-13 06:07:47.376563] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.667 [2024-07-13 06:07:47.376919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.667 [2024-07-13 06:07:47.376949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:55.667 [2024-07-13 06:07:47.382285] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.667 [2024-07-13 06:07:47.382626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.667 [2024-07-13 06:07:47.382661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:55.667 [2024-07-13 06:07:47.388661] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.667 [2024-07-13 06:07:47.388987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.667 [2024-07-13 06:07:47.389017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:55.926 [2024-07-13 06:07:47.394412] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.926 [2024-07-13 06:07:47.394773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.926 [2024-07-13 06:07:47.394802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:55.926 [2024-07-13 06:07:47.400682] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.926 [2024-07-13 06:07:47.401076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.926 [2024-07-13 06:07:47.401122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:55.926 [2024-07-13 06:07:47.407197] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.926 [2024-07-13 06:07:47.407573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.926 [2024-07-13 06:07:47.407637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:55.926 [2024-07-13 06:07:47.412839] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10ed710) with pdu=0x2000190fef90 00:20:55.926 [2024-07-13 06:07:47.413055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.926 [2024-07-13 06:07:47.413115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:55.926 00:20:55.926 Latency(us) 00:20:55.926 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:55.926 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:20:55.926 nvme0n1 : 2.00 4957.74 619.72 0.00 0.00 3219.46 2546.97 10128.29 00:20:55.926 =================================================================================================================== 00:20:55.926 Total : 4957.74 619.72 0.00 0.00 3219.46 2546.97 10128.29 00:20:55.926 0 00:20:55.926 06:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:20:55.926 06:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:20:55.926 06:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:20:55.926 06:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:20:55.926 | .driver_specific 00:20:55.926 | .nvme_error 00:20:55.926 | .status_code 00:20:55.926 | .command_transient_transport_error' 00:20:56.195 06:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 320 > 0 )) 00:20:56.195 06:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 94531 00:20:56.195 06:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 94531 ']' 00:20:56.195 06:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 94531 00:20:56.195 06:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:20:56.195 06:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:56.195 06:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94531 00:20:56.195 killing process with pid 94531 00:20:56.195 06:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:56.195 06:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:56.195 06:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94531' 00:20:56.195 06:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_error 
-- common/autotest_common.sh@967 -- # kill 94531 00:20:56.195 Received shutdown signal, test time was about 2.000000 seconds 00:20:56.195 00:20:56.195 Latency(us) 00:20:56.195 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:56.195 =================================================================================================================== 00:20:56.195 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:56.195 06:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 94531 00:20:56.195 06:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 94367 00:20:56.195 06:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 94367 ']' 00:20:56.195 06:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 94367 00:20:56.195 06:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:20:56.476 06:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:56.476 06:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94367 00:20:56.476 06:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:56.476 06:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:56.476 06:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94367' 00:20:56.476 killing process with pid 94367 00:20:56.476 06:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 94367 00:20:56.476 06:07:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 94367 00:20:56.476 ************************************ 00:20:56.476 END TEST nvmf_digest_error 00:20:56.476 ************************************ 00:20:56.476 00:20:56.476 real 0m14.810s 00:20:56.476 user 0m28.519s 00:20:56.476 sys 0m4.532s 00:20:56.476 06:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:56.476 06:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:56.476 06:07:48 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:20:56.476 06:07:48 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:20:56.476 06:07:48 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:20:56.476 06:07:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:56.476 06:07:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:20:56.734 06:07:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:56.734 06:07:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:20:56.734 06:07:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:56.734 06:07:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:56.734 rmmod nvme_tcp 00:20:56.734 rmmod nvme_fabrics 00:20:56.734 rmmod nvme_keyring 00:20:56.734 06:07:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:56.734 Process with pid 94367 is not found 00:20:56.734 06:07:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:20:56.734 06:07:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:20:56.734 06:07:48 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@489 -- # '[' -n 94367 ']' 00:20:56.734 06:07:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 94367 00:20:56.734 06:07:48 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 94367 ']' 00:20:56.734 06:07:48 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 94367 00:20:56.735 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (94367) - No such process 00:20:56.735 06:07:48 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 94367 is not found' 00:20:56.735 06:07:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:56.735 06:07:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:56.735 06:07:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:56.735 06:07:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:56.735 06:07:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:56.735 06:07:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:56.735 06:07:48 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:56.735 06:07:48 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:56.735 06:07:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:56.735 ************************************ 00:20:56.735 END TEST nvmf_digest 00:20:56.735 ************************************ 00:20:56.735 00:20:56.735 real 0m30.501s 00:20:56.735 user 0m57.722s 00:20:56.735 sys 0m9.121s 00:20:56.735 06:07:48 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:56.735 06:07:48 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:20:56.735 06:07:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:56.735 06:07:48 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:20:56.735 06:07:48 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 1 -eq 1 ]] 00:20:56.735 06:07:48 nvmf_tcp -- nvmf/nvmf.sh@117 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:20:56.735 06:07:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:56.735 06:07:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:56.735 06:07:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:56.735 ************************************ 00:20:56.735 START TEST nvmf_host_multipath 00:20:56.735 ************************************ 00:20:56.735 06:07:48 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:20:56.994 * Looking for test storage... 
00:20:56.994 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:56.994 06:07:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:56.994 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:20:56.994 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:56.994 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:56.994 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:56.994 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:56.994 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:56.994 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:56.994 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:56.994 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:56.994 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:56.994 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:56.994 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:20:56.994 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=d95af516-4532-4483-a837-b3cd72acabce 00:20:56.994 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:56.994 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:56.994 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:56.994 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:56.994 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:56.994 06:07:48 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:56.994 06:07:48 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:56.994 06:07:48 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:56.994 06:07:48 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.994 06:07:48 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.994 06:07:48 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.994 06:07:48 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:20:56.994 06:07:48 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.994 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:20:56.994 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:56.994 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:56.994 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:56.994 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:56.994 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:56.994 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:56.994 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:56.994 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:56.994 06:07:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:56.994 06:07:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:56.994 06:07:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:56.994 06:07:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:20:56.994 06:07:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:56.994 06:07:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:20:56.994 06:07:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:20:56.994 06:07:48 
nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:56.994 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:56.995 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:56.995 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:56.995 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:56.995 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:56.995 06:07:48 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:56.995 06:07:48 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:56.995 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:56.995 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:56.995 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:56.995 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:56.995 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:56.995 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:56.995 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:56.995 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:56.995 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:56.995 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:56.995 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:56.995 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:56.995 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:56.995 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:56.995 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:56.995 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:56.995 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:56.995 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:56.995 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:56.995 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:56.995 Cannot find device "nvmf_tgt_br" 00:20:56.995 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:20:56.995 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:56.995 Cannot find device "nvmf_tgt_br2" 00:20:56.995 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:20:56.995 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:56.995 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br 
down 00:20:56.995 Cannot find device "nvmf_tgt_br" 00:20:56.995 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:20:56.995 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:56.995 Cannot find device "nvmf_tgt_br2" 00:20:56.995 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:20:56.995 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:56.995 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:56.995 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:56.995 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:56.995 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:20:56.995 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:56.995 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:56.995 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:20:56.995 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:56.995 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:56.995 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:56.995 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:56.995 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:56.995 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:57.254 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:57.254 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:57.254 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:57.254 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:57.254 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:57.254 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:57.254 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:57.254 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:57.254 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:57.254 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:57.254 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:57.254 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:57.254 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 
00:20:57.254 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:57.254 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:57.254 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:57.254 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:57.254 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:57.254 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:57.254 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:20:57.254 00:20:57.254 --- 10.0.0.2 ping statistics --- 00:20:57.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:57.254 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:20:57.254 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:57.254 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:57.254 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:20:57.254 00:20:57.254 --- 10.0.0.3 ping statistics --- 00:20:57.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:57.254 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:20:57.254 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:57.254 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:57.254 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:20:57.254 00:20:57.254 --- 10.0.0.1 ping statistics --- 00:20:57.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:57.254 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:20:57.254 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:57.254 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:20:57.254 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:57.254 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:57.254 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:57.254 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:57.254 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:57.254 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:57.254 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:57.254 06:07:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:20:57.254 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:57.254 06:07:48 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:57.254 06:07:48 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:57.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:57.254 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=94796 00:20:57.254 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:20:57.254 06:07:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 94796 00:20:57.254 06:07:48 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 94796 ']' 00:20:57.254 06:07:48 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:57.254 06:07:48 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:57.254 06:07:48 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:57.254 06:07:48 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:57.254 06:07:48 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:57.254 [2024-07-13 06:07:48.921497] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:20:57.254 [2024-07-13 06:07:48.921847] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:57.512 [2024-07-13 06:07:49.063177] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:57.512 [2024-07-13 06:07:49.107428] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:57.512 [2024-07-13 06:07:49.107724] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:57.512 [2024-07-13 06:07:49.107925] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:57.512 [2024-07-13 06:07:49.108255] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:57.512 [2024-07-13 06:07:49.108509] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
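The startup sequence above launches the NVMe-oF target inside the nvmf_tgt_ns_spdk namespace and then blocks until its RPC socket answers before any configuration RPCs are issued. A minimal sketch of that pattern, reusing the nvmf_tgt command recorded in the log; the polling loop is an illustrative assumption, not the suite's actual waitforlisten helper:

  # start the target in the test namespace (command as recorded above)
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  nvmfpid=$!
  # illustrative wait (assumption): poll the default RPC socket until the target responds
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
          sleep 0.5
  done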
00:20:57.512 [2024-07-13 06:07:49.111664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:57.512 [2024-07-13 06:07:49.111696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:57.512 [2024-07-13 06:07:49.148419] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:57.512 06:07:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:57.512 06:07:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:20:57.512 06:07:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:57.512 06:07:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:57.512 06:07:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:57.770 06:07:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:57.770 06:07:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=94796 00:20:57.771 06:07:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:58.028 [2024-07-13 06:07:49.508005] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:58.028 06:07:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:58.287 Malloc0 00:20:58.287 06:07:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:20:58.545 06:07:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:58.804 06:07:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:59.062 [2024-07-13 06:07:50.541307] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:59.062 06:07:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:59.321 [2024-07-13 06:07:50.825550] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:59.321 06:07:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=94843 00:20:59.321 06:07:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:20:59.321 06:07:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:59.321 06:07:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 94843 /var/tmp/bdevperf.sock 00:20:59.321 06:07:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 94843 ']' 00:20:59.321 06:07:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:59.321 06:07:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:59.321 06:07:50 
nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:59.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:59.321 06:07:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:59.321 06:07:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:59.579 06:07:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:59.579 06:07:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:20:59.579 06:07:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:20:59.837 06:07:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:21:00.094 Nvme0n1 00:21:00.094 06:07:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:21:00.352 Nvme0n1 00:21:00.353 06:07:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:21:00.353 06:07:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:21:01.287 06:07:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:21:01.287 06:07:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:01.545 06:07:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:01.803 06:07:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:21:01.803 06:07:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94796 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:01.803 06:07:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=94876 00:21:01.803 06:07:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:08.366 06:07:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:08.366 06:07:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:08.366 06:07:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:08.366 06:07:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:08.366 Attaching 4 probes... 
00:21:08.366 @path[10.0.0.2, 4421]: 15598 00:21:08.366 @path[10.0.0.2, 4421]: 16110 00:21:08.366 @path[10.0.0.2, 4421]: 16071 00:21:08.366 @path[10.0.0.2, 4421]: 16049 00:21:08.366 @path[10.0.0.2, 4421]: 16058 00:21:08.366 06:07:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:08.366 06:07:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:08.366 06:07:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:08.366 06:07:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:08.366 06:07:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:08.366 06:07:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:08.366 06:07:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 94876 00:21:08.366 06:07:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:08.366 06:07:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:21:08.366 06:07:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:08.366 06:08:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:21:08.622 06:08:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:21:08.622 06:08:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94796 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:08.622 06:08:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=94989 00:21:08.622 06:08:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:15.233 06:08:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:15.233 06:08:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:21:15.233 06:08:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:21:15.234 06:08:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:15.234 Attaching 4 probes... 
00:21:15.234 @path[10.0.0.2, 4420]: 16045 00:21:15.234 @path[10.0.0.2, 4420]: 16390 00:21:15.234 @path[10.0.0.2, 4420]: 16240 00:21:15.234 @path[10.0.0.2, 4420]: 16232 00:21:15.234 @path[10.0.0.2, 4420]: 16371 00:21:15.234 06:08:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:15.234 06:08:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:15.234 06:08:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:15.234 06:08:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:21:15.234 06:08:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:21:15.234 06:08:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:21:15.234 06:08:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 94989 00:21:15.234 06:08:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:15.234 06:08:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:21:15.234 06:08:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:21:15.492 06:08:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:15.492 06:08:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:21:15.492 06:08:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95107 00:21:15.492 06:08:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94796 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:15.492 06:08:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:22.058 06:08:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:22.058 06:08:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:22.058 06:08:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:22.058 06:08:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:22.058 Attaching 4 probes... 
00:21:22.058 @path[10.0.0.2, 4421]: 11918 00:21:22.058 @path[10.0.0.2, 4421]: 15821 00:21:22.058 @path[10.0.0.2, 4421]: 15720 00:21:22.058 @path[10.0.0.2, 4421]: 15704 00:21:22.058 @path[10.0.0.2, 4421]: 15827 00:21:22.058 06:08:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:22.058 06:08:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:22.058 06:08:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:22.058 06:08:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:22.058 06:08:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:22.058 06:08:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:22.058 06:08:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95107 00:21:22.058 06:08:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:22.058 06:08:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:21:22.058 06:08:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:21:22.318 06:08:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:21:22.576 06:08:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:21:22.576 06:08:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95219 00:21:22.576 06:08:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94796 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:22.576 06:08:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:29.139 06:08:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:21:29.139 06:08:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:29.139 06:08:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:21:29.139 06:08:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:29.139 Attaching 4 probes... 
00:21:29.139 00:21:29.139 00:21:29.139 00:21:29.139 00:21:29.139 00:21:29.139 06:08:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:29.139 06:08:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:29.139 06:08:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:29.139 06:08:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:21:29.139 06:08:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:21:29.139 06:08:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:21:29.139 06:08:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95219 00:21:29.139 06:08:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:29.139 06:08:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:21:29.139 06:08:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:29.139 06:08:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:29.397 06:08:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:21:29.397 06:08:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95333 00:21:29.397 06:08:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94796 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:29.397 06:08:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:35.981 06:08:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:35.981 06:08:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:35.981 06:08:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:35.981 06:08:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:35.981 Attaching 4 probes... 
00:21:35.981 @path[10.0.0.2, 4421]: 15341 00:21:35.981 @path[10.0.0.2, 4421]: 15659 00:21:35.981 @path[10.0.0.2, 4421]: 15560 00:21:35.981 @path[10.0.0.2, 4421]: 15623 00:21:35.981 @path[10.0.0.2, 4421]: 15648 00:21:35.981 06:08:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:35.981 06:08:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:35.981 06:08:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:35.981 06:08:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:35.981 06:08:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:35.981 06:08:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:35.981 06:08:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95333 00:21:35.981 06:08:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:35.981 06:08:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:35.981 06:08:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:21:36.933 06:08:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:21:36.933 06:08:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95457 00:21:36.933 06:08:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94796 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:36.933 06:08:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:43.497 06:08:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:43.497 06:08:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:21:43.497 06:08:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:21:43.497 06:08:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:43.497 Attaching 4 probes... 
00:21:43.497 @path[10.0.0.2, 4420]: 15492 00:21:43.497 @path[10.0.0.2, 4420]: 15588 00:21:43.497 @path[10.0.0.2, 4420]: 15747 00:21:43.497 @path[10.0.0.2, 4420]: 15754 00:21:43.497 @path[10.0.0.2, 4420]: 15739 00:21:43.497 06:08:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:43.497 06:08:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:43.497 06:08:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:43.497 06:08:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:21:43.497 06:08:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:21:43.497 06:08:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:21:43.497 06:08:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95457 00:21:43.497 06:08:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:43.497 06:08:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:43.497 [2024-07-13 06:08:34.942986] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:43.497 06:08:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:43.497 06:08:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:21:50.059 06:08:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:21:50.059 06:08:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95626 00:21:50.059 06:08:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94796 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:50.059 06:08:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:56.637 06:08:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:56.637 06:08:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:56.637 06:08:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:56.637 06:08:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:56.637 Attaching 4 probes... 
00:21:56.637 @path[10.0.0.2, 4421]: 16499 00:21:56.637 @path[10.0.0.2, 4421]: 17345 00:21:56.637 @path[10.0.0.2, 4421]: 18149 00:21:56.637 @path[10.0.0.2, 4421]: 18302 00:21:56.637 @path[10.0.0.2, 4421]: 18320 00:21:56.637 06:08:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:56.637 06:08:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:56.637 06:08:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:56.637 06:08:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:56.637 06:08:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:56.637 06:08:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:56.637 06:08:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95626 00:21:56.638 06:08:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:56.638 06:08:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 94843 00:21:56.638 06:08:47 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 94843 ']' 00:21:56.638 06:08:47 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 94843 00:21:56.638 06:08:47 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:21:56.638 06:08:47 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:56.638 06:08:47 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94843 00:21:56.638 killing process with pid 94843 00:21:56.638 06:08:47 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:56.638 06:08:47 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:56.638 06:08:47 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94843' 00:21:56.638 06:08:47 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 94843 00:21:56.638 06:08:47 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 94843 00:21:56.638 Connection closed with partial response: 00:21:56.638 00:21:56.638 00:21:56.638 06:08:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 94843 00:21:56.638 06:08:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:56.638 [2024-07-13 06:07:50.893255] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:21:56.638 [2024-07-13 06:07:50.893349] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94843 ] 00:21:56.638 [2024-07-13 06:07:51.033562] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:56.638 [2024-07-13 06:07:51.074043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:56.638 [2024-07-13 06:07:51.105985] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:56.638 Running I/O for 90 seconds... 
00:21:56.638 [2024-07-13 06:08:00.324543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:125824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.638 [2024-07-13 06:08:00.324637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:56.638 [2024-07-13 06:08:00.324696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:125832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.638 [2024-07-13 06:08:00.324720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:56.638 [2024-07-13 06:08:00.324744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:125840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.638 [2024-07-13 06:08:00.324760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:56.638 [2024-07-13 06:08:00.324783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:125848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.638 [2024-07-13 06:08:00.324799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:56.638 [2024-07-13 06:08:00.324821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:125856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.638 [2024-07-13 06:08:00.324837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:56.638 [2024-07-13 06:08:00.324859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:125864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.638 [2024-07-13 06:08:00.324875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:56.638 [2024-07-13 06:08:00.324897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:125872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.638 [2024-07-13 06:08:00.324913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:56.638 [2024-07-13 06:08:00.324935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:125880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.638 [2024-07-13 06:08:00.324951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:56.638 [2024-07-13 06:08:00.324973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:125888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.638 [2024-07-13 06:08:00.324989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:56.638 [2024-07-13 06:08:00.325011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:125896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.638 [2024-07-13 06:08:00.325027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:56.638 [2024-07-13 06:08:00.325049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:125904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.638 [2024-07-13 06:08:00.325105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:56.638 [2024-07-13 06:08:00.325128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:125912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.638 [2024-07-13 06:08:00.325145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:56.638 [2024-07-13 06:08:00.325181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:125920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.638 [2024-07-13 06:08:00.325202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:56.638 [2024-07-13 06:08:00.325223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:125928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.638 [2024-07-13 06:08:00.325239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:56.638 [2024-07-13 06:08:00.325261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:125936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.638 [2024-07-13 06:08:00.325277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:56.638 [2024-07-13 06:08:00.325299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:125944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.638 [2024-07-13 06:08:00.325315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:56.638 [2024-07-13 06:08:00.325336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:125952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.638 [2024-07-13 06:08:00.325353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:56.638 [2024-07-13 06:08:00.325375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:125960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.638 [2024-07-13 06:08:00.325405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:56.638 [2024-07-13 06:08:00.325431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:125968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.638 [2024-07-13 06:08:00.325447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:56.638 [2024-07-13 06:08:00.325469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:125976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.638 [2024-07-13 06:08:00.325486] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:56.638 [2024-07-13 06:08:00.325508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:125984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.638 [2024-07-13 06:08:00.325524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:56.638 [2024-07-13 06:08:00.325545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:125992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.638 [2024-07-13 06:08:00.325561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:56.638 [2024-07-13 06:08:00.325583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:126000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.638 [2024-07-13 06:08:00.325607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:56.638 [2024-07-13 06:08:00.325630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:126008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.638 [2024-07-13 06:08:00.325647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:56.638 [2024-07-13 06:08:00.325669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:125504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.638 [2024-07-13 06:08:00.325685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:56.638 [2024-07-13 06:08:00.325708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:125512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.638 [2024-07-13 06:08:00.325724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:56.638 [2024-07-13 06:08:00.325746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:125520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.638 [2024-07-13 06:08:00.325762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:56.638 [2024-07-13 06:08:00.325784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:125528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.638 [2024-07-13 06:08:00.325800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:56.638 [2024-07-13 06:08:00.325822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:125536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.638 [2024-07-13 06:08:00.325838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:56.638 [2024-07-13 06:08:00.325860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:125544 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:56.638 [2024-07-13 06:08:00.325876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:56.638 [2024-07-13 06:08:00.325898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:125552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.638 [2024-07-13 06:08:00.325914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:56.638 [2024-07-13 06:08:00.325936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:125560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.638 [2024-07-13 06:08:00.325952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:56.638 [2024-07-13 06:08:00.325973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:125568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.638 [2024-07-13 06:08:00.325989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:56.638 [2024-07-13 06:08:00.326011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:125576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.639 [2024-07-13 06:08:00.326028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:56.639 [2024-07-13 06:08:00.326050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:125584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.639 [2024-07-13 06:08:00.326074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:56.639 [2024-07-13 06:08:00.326097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:125592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.639 [2024-07-13 06:08:00.326114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:56.639 [2024-07-13 06:08:00.326136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:125600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.639 [2024-07-13 06:08:00.326152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:56.639 [2024-07-13 06:08:00.326174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:125608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.639 [2024-07-13 06:08:00.326191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:56.639 [2024-07-13 06:08:00.326224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:125616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.639 [2024-07-13 06:08:00.326243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:56.639 [2024-07-13 06:08:00.326266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:84 nsid:1 lba:125624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.639 [2024-07-13 06:08:00.326282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:56.639 [2024-07-13 06:08:00.326325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:126016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.639 [2024-07-13 06:08:00.326348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:56.639 [2024-07-13 06:08:00.326383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:126024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.639 [2024-07-13 06:08:00.326403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:56.639 [2024-07-13 06:08:00.326425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:126032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.639 [2024-07-13 06:08:00.326442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:56.639 [2024-07-13 06:08:00.326464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:126040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.639 [2024-07-13 06:08:00.326480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:56.639 [2024-07-13 06:08:00.326501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:126048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.639 [2024-07-13 06:08:00.326518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:56.639 [2024-07-13 06:08:00.326539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:126056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.639 [2024-07-13 06:08:00.326556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:56.639 [2024-07-13 06:08:00.326578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:126064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.639 [2024-07-13 06:08:00.326594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:56.639 [2024-07-13 06:08:00.326625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:126072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.639 [2024-07-13 06:08:00.326642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:56.639 [2024-07-13 06:08:00.326664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:126080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.639 [2024-07-13 06:08:00.326680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:56.639 [2024-07-13 
06:08:00.326702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:126088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.639 [2024-07-13 06:08:00.326720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:56.639 [2024-07-13 06:08:00.326742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:126096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.639 [2024-07-13 06:08:00.326758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:56.639 [2024-07-13 06:08:00.326780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:126104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.639 [2024-07-13 06:08:00.326796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:56.639 [2024-07-13 06:08:00.326817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:126112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.639 [2024-07-13 06:08:00.326833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:56.639 [2024-07-13 06:08:00.326855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:126120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.639 [2024-07-13 06:08:00.326872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:56.639 [2024-07-13 06:08:00.326893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:125632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.639 [2024-07-13 06:08:00.326909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.639 [2024-07-13 06:08:00.326931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:125640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.639 [2024-07-13 06:08:00.326947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.639 [2024-07-13 06:08:00.326969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:125648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.639 [2024-07-13 06:08:00.326985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:56.639 [2024-07-13 06:08:00.327007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:125656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.639 [2024-07-13 06:08:00.327023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:56.639 [2024-07-13 06:08:00.327044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:125664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.639 [2024-07-13 06:08:00.327060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:75 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:56.639 [2024-07-13 06:08:00.327103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:125672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.639 [2024-07-13 06:08:00.327122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:56.639 [2024-07-13 06:08:00.327145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:125680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.639 [2024-07-13 06:08:00.327161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:56.639 [2024-07-13 06:08:00.327184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:125688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.639 [2024-07-13 06:08:00.327200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:56.639 [2024-07-13 06:08:00.327222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:126128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.639 [2024-07-13 06:08:00.327238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:56.639 [2024-07-13 06:08:00.327259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:126136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.639 [2024-07-13 06:08:00.327275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:56.639 [2024-07-13 06:08:00.327298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:126144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.639 [2024-07-13 06:08:00.327313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:56.639 [2024-07-13 06:08:00.327335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:126152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.639 [2024-07-13 06:08:00.327352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:56.639 [2024-07-13 06:08:00.327388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:126160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.639 [2024-07-13 06:08:00.327407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:56.639 [2024-07-13 06:08:00.327430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:126168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.639 [2024-07-13 06:08:00.327446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:56.639 [2024-07-13 06:08:00.327468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:126176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.639 [2024-07-13 06:08:00.327484] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:56.639 [2024-07-13 06:08:00.327506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:126184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.639 [2024-07-13 06:08:00.327522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:56.639 [2024-07-13 06:08:00.327544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:126192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.639 [2024-07-13 06:08:00.327560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:56.639 [2024-07-13 06:08:00.327582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:126200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.639 [2024-07-13 06:08:00.327605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:56.639 [2024-07-13 06:08:00.327628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:126208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.639 [2024-07-13 06:08:00.327645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:56.639 [2024-07-13 06:08:00.327667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:126216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.639 [2024-07-13 06:08:00.327682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:56.639 [2024-07-13 06:08:00.327704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:126224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.640 [2024-07-13 06:08:00.327721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:56.640 [2024-07-13 06:08:00.327742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:126232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.640 [2024-07-13 06:08:00.327758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:56.640 [2024-07-13 06:08:00.327780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:126240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.640 [2024-07-13 06:08:00.327796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:56.640 [2024-07-13 06:08:00.327818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:126248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.640 [2024-07-13 06:08:00.327834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:56.640 [2024-07-13 06:08:00.327856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:126256 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:21:56.640 [2024-07-13 06:08:00.327872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:56.640 [2024-07-13 06:08:00.327894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:126264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.640 [2024-07-13 06:08:00.327909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:56.640 [2024-07-13 06:08:00.327932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:125696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.640 [2024-07-13 06:08:00.327947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:56.640 [2024-07-13 06:08:00.327969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:125704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.640 [2024-07-13 06:08:00.327988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:56.640 [2024-07-13 06:08:00.328012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:125712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.640 [2024-07-13 06:08:00.328030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:56.640 [2024-07-13 06:08:00.328053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:125720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.640 [2024-07-13 06:08:00.328078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:56.640 [2024-07-13 06:08:00.328102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:125728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.640 [2024-07-13 06:08:00.328119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:56.640 [2024-07-13 06:08:00.328142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:125736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.640 [2024-07-13 06:08:00.328160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:56.640 [2024-07-13 06:08:00.328183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:125744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.640 [2024-07-13 06:08:00.328200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:56.640 [2024-07-13 06:08:00.328223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:125752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.640 [2024-07-13 06:08:00.328241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:56.640 [2024-07-13 06:08:00.328283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:2 nsid:1 lba:126272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.640 [2024-07-13 06:08:00.328301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:56.640 [2024-07-13 06:08:00.328322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:126280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.640 [2024-07-13 06:08:00.328354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:56.640 [2024-07-13 06:08:00.328376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:126288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.640 [2024-07-13 06:08:00.328409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:56.640 [2024-07-13 06:08:00.328435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:126296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.640 [2024-07-13 06:08:00.328453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:56.640 [2024-07-13 06:08:00.328474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:126304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.640 [2024-07-13 06:08:00.328490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:56.640 [2024-07-13 06:08:00.328512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:126312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.640 [2024-07-13 06:08:00.328528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:56.640 [2024-07-13 06:08:00.328550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:126320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.640 [2024-07-13 06:08:00.328566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:56.640 [2024-07-13 06:08:00.328588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:126328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.640 [2024-07-13 06:08:00.328632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:56.640 [2024-07-13 06:08:00.328672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:126336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.640 [2024-07-13 06:08:00.328703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:56.640 [2024-07-13 06:08:00.328754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:126344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.640 [2024-07-13 06:08:00.328770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:56.640 [2024-07-13 06:08:00.328808] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:126352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.640 [2024-07-13 06:08:00.328825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:56.640 [2024-07-13 06:08:00.328847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:126360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.640 [2024-07-13 06:08:00.328862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:56.640 [2024-07-13 06:08:00.328884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:126368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.640 [2024-07-13 06:08:00.328900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:56.640 [2024-07-13 06:08:00.328922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:126376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.640 [2024-07-13 06:08:00.328938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:56.640 [2024-07-13 06:08:00.328959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:126384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.640 [2024-07-13 06:08:00.328975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:56.640 [2024-07-13 06:08:00.328997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:126392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.640 [2024-07-13 06:08:00.329014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:56.640 [2024-07-13 06:08:00.329035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:125760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.640 [2024-07-13 06:08:00.329051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:56.640 [2024-07-13 06:08:00.329073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:125768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.640 [2024-07-13 06:08:00.329089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:56.640 [2024-07-13 06:08:00.329125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:125776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.640 [2024-07-13 06:08:00.329160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:56.640 [2024-07-13 06:08:00.329183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:125784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.640 [2024-07-13 06:08:00.329199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 
sqhd:0035 p:0 m:0 dnr:0 00:21:56.640 [2024-07-13 06:08:00.329229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:125792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.640 [2024-07-13 06:08:00.329246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:56.640 [2024-07-13 06:08:00.329283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:125800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.640 [2024-07-13 06:08:00.329298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:56.640 [2024-07-13 06:08:00.329320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:125808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.640 [2024-07-13 06:08:00.329335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:56.640 [2024-07-13 06:08:00.330906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:125816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.640 [2024-07-13 06:08:00.330938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:56.640 [2024-07-13 06:08:00.330968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:126400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.640 [2024-07-13 06:08:00.330986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:56.640 [2024-07-13 06:08:00.331023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:126408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.640 [2024-07-13 06:08:00.331040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:56.640 [2024-07-13 06:08:00.331077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:126416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.640 [2024-07-13 06:08:00.331093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:56.640 [2024-07-13 06:08:00.331115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:126424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.640 [2024-07-13 06:08:00.331131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:56.641 [2024-07-13 06:08:00.331153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:126432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.641 [2024-07-13 06:08:00.331169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:56.641 [2024-07-13 06:08:00.331191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:126440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.641 [2024-07-13 06:08:00.331207] nvme_qpair.c: 
[nvme_qpair.c nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* output: repeated READ/WRITE commands on sqid:1 nsid:1, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) status, logged 2024-07-13 06:08:00 through 06:08:14]
cid:77 nsid:1 lba:25456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.645 [2024-07-13 06:08:14.052063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:56.645 [2024-07-13 06:08:14.052084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:25464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.645 [2024-07-13 06:08:14.052100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:56.645 [2024-07-13 06:08:14.052132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:25472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.646 [2024-07-13 06:08:14.052151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:56.646 [2024-07-13 06:08:14.052174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:25480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.646 [2024-07-13 06:08:14.052190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:56.646 [2024-07-13 06:08:14.052211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.646 [2024-07-13 06:08:14.052227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:56.646 [2024-07-13 06:08:14.052249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:25496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.646 [2024-07-13 06:08:14.052264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:56.646 [2024-07-13 06:08:14.052286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.646 [2024-07-13 06:08:14.052302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:56.646 [2024-07-13 06:08:14.052323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:25512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.646 [2024-07-13 06:08:14.052339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:56.646 [2024-07-13 06:08:14.052361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:25520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.646 [2024-07-13 06:08:14.052389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:56.646 [2024-07-13 06:08:14.052413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:25528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.646 [2024-07-13 06:08:14.052428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:56.646 [2024-07-13 06:08:14.052450] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.646 [2024-07-13 06:08:14.052477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:56.646 [2024-07-13 06:08:14.052498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:24968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.646 [2024-07-13 06:08:14.052514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:56.646 [2024-07-13 06:08:14.052536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:24976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.646 [2024-07-13 06:08:14.052551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:56.646 [2024-07-13 06:08:14.052573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:24984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.646 [2024-07-13 06:08:14.052589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:56.646 [2024-07-13 06:08:14.052619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:24992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.646 [2024-07-13 06:08:14.052636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:56.646 [2024-07-13 06:08:14.052658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:25000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.646 [2024-07-13 06:08:14.052674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:56.646 [2024-07-13 06:08:14.052696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:25008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.646 [2024-07-13 06:08:14.052712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:56.646 [2024-07-13 06:08:14.052733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:25016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.646 [2024-07-13 06:08:14.052749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:56.646 [2024-07-13 06:08:14.052771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:25024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.646 [2024-07-13 06:08:14.052787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:56.646 [2024-07-13 06:08:14.052810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:25032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.646 [2024-07-13 06:08:14.052826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0019 p:0 m:0 
dnr:0 00:21:56.646 [2024-07-13 06:08:14.052848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.646 [2024-07-13 06:08:14.052863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:56.646 [2024-07-13 06:08:14.052885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:25048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.646 [2024-07-13 06:08:14.052901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:56.646 [2024-07-13 06:08:14.052923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:25056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.646 [2024-07-13 06:08:14.052939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:56.646 [2024-07-13 06:08:14.052960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:25064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.646 [2024-07-13 06:08:14.052976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:56.646 [2024-07-13 06:08:14.053009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:25072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.646 [2024-07-13 06:08:14.053025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:56.646 [2024-07-13 06:08:14.053047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:25080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.646 [2024-07-13 06:08:14.053062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:56.646 [2024-07-13 06:08:14.053088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:25536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.646 [2024-07-13 06:08:14.053112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:56.646 [2024-07-13 06:08:14.053135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:25544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.646 [2024-07-13 06:08:14.053152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:56.646 [2024-07-13 06:08:14.053174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:25552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.646 [2024-07-13 06:08:14.053190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:56.646 [2024-07-13 06:08:14.053212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.646 [2024-07-13 06:08:14.053228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:56.646 [2024-07-13 06:08:14.053250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:25568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.646 [2024-07-13 06:08:14.053266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:56.646 [2024-07-13 06:08:14.053288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:25576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.646 [2024-07-13 06:08:14.053304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:56.646 [2024-07-13 06:08:14.053325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:25584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.646 [2024-07-13 06:08:14.053341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:56.646 [2024-07-13 06:08:14.053363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:25592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.646 [2024-07-13 06:08:14.053391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:56.646 [2024-07-13 06:08:14.053414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:25088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.646 [2024-07-13 06:08:14.053430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:56.646 [2024-07-13 06:08:14.053452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:25096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.646 [2024-07-13 06:08:14.053468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:56.646 [2024-07-13 06:08:14.053490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:25104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.646 [2024-07-13 06:08:14.053506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:56.646 [2024-07-13 06:08:14.053527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:25112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.646 [2024-07-13 06:08:14.053543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:56.646 [2024-07-13 06:08:14.053565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:25120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.647 [2024-07-13 06:08:14.053588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:56.647 [2024-07-13 06:08:14.053611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:25128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.647 [2024-07-13 06:08:14.053627] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:56.647 [2024-07-13 06:08:14.053649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.647 [2024-07-13 06:08:14.053664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:56.647 [2024-07-13 06:08:14.053686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:25144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.647 [2024-07-13 06:08:14.053702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:56.647 [2024-07-13 06:08:14.053724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:25152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.647 [2024-07-13 06:08:14.053739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:56.647 [2024-07-13 06:08:14.053761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:25160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.647 [2024-07-13 06:08:14.053777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:56.647 [2024-07-13 06:08:14.053799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.647 [2024-07-13 06:08:14.053815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:56.647 [2024-07-13 06:08:14.053838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:25176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.647 [2024-07-13 06:08:14.053854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:56.647 [2024-07-13 06:08:14.053875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:25184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.647 [2024-07-13 06:08:14.053891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:56.647 [2024-07-13 06:08:14.053913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:25192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.647 [2024-07-13 06:08:14.053929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:56.647 [2024-07-13 06:08:14.053951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:25200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.647 [2024-07-13 06:08:14.053966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:56.647 [2024-07-13 06:08:14.053988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:25208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:56.647 [2024-07-13 06:08:14.054004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:56.647 [2024-07-13 06:08:14.054025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:25600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.647 [2024-07-13 06:08:14.054041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:56.647 [2024-07-13 06:08:14.054074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:25608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.647 [2024-07-13 06:08:14.054090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:56.647 [2024-07-13 06:08:14.054112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:25616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.647 [2024-07-13 06:08:14.054128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:56.647 [2024-07-13 06:08:14.054149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:25624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.647 [2024-07-13 06:08:14.054164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:56.647 [2024-07-13 06:08:14.054186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:25632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.647 [2024-07-13 06:08:14.054202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:56.647 [2024-07-13 06:08:14.054232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:25640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.647 [2024-07-13 06:08:14.054251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:56.647 [2024-07-13 06:08:14.054273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:25648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.647 [2024-07-13 06:08:14.054288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:56.647 [2024-07-13 06:08:14.054310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.647 [2024-07-13 06:08:14.054326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:56.647 [2024-07-13 06:08:14.054347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:25216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.647 [2024-07-13 06:08:14.054363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:56.647 [2024-07-13 06:08:14.054398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 
lba:25224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.647 [2024-07-13 06:08:14.054415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:56.647 [2024-07-13 06:08:14.054437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:25232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.647 [2024-07-13 06:08:14.054453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:56.647 [2024-07-13 06:08:14.054475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.647 [2024-07-13 06:08:14.054491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:56.647 [2024-07-13 06:08:14.054513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:25248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.647 [2024-07-13 06:08:14.054528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:56.647 [2024-07-13 06:08:14.054557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:25256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.647 [2024-07-13 06:08:14.054574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:56.647 [2024-07-13 06:08:14.054596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.647 [2024-07-13 06:08:14.054612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:56.647 [2024-07-13 06:08:14.055016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.647 [2024-07-13 06:08:14.055043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:56.647 [2024-07-13 06:08:27.434084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:29944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.647 [2024-07-13 06:08:27.434143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.647 [2024-07-13 06:08:27.434171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:29952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.647 [2024-07-13 06:08:27.434188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.647 [2024-07-13 06:08:27.434204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:29960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.647 [2024-07-13 06:08:27.434243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.647 [2024-07-13 06:08:27.434261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:10 nsid:1 lba:29968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.647 [2024-07-13 06:08:27.434276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.647 [2024-07-13 06:08:27.434292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:29976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.647 [2024-07-13 06:08:27.434306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.647 [2024-07-13 06:08:27.434321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:29984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.647 [2024-07-13 06:08:27.434336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.647 [2024-07-13 06:08:27.434351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:29992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.647 [2024-07-13 06:08:27.434378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.647 [2024-07-13 06:08:27.434398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.647 [2024-07-13 06:08:27.434413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.647 [2024-07-13 06:08:27.434429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:30008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.647 [2024-07-13 06:08:27.434443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.647 [2024-07-13 06:08:27.434459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:30016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.647 [2024-07-13 06:08:27.434493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.647 [2024-07-13 06:08:27.434510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:30024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.647 [2024-07-13 06:08:27.434535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.647 [2024-07-13 06:08:27.434551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:30032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.647 [2024-07-13 06:08:27.434566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.647 [2024-07-13 06:08:27.434582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:30040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.647 [2024-07-13 06:08:27.434596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.647 [2024-07-13 06:08:27.434612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:30480 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.648 [2024-07-13 06:08:27.434627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.648 [2024-07-13 06:08:27.434642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:30488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.648 [2024-07-13 06:08:27.434657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.648 [2024-07-13 06:08:27.434672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:30496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.648 [2024-07-13 06:08:27.434686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.648 [2024-07-13 06:08:27.434701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:30504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.648 [2024-07-13 06:08:27.434716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.648 [2024-07-13 06:08:27.434734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:30512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.648 [2024-07-13 06:08:27.434748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.648 [2024-07-13 06:08:27.434764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:30520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.648 [2024-07-13 06:08:27.434778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.648 [2024-07-13 06:08:27.434793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.648 [2024-07-13 06:08:27.434808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.648 [2024-07-13 06:08:27.434823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:30536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.648 [2024-07-13 06:08:27.434838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.648 [2024-07-13 06:08:27.434883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:30544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.648 [2024-07-13 06:08:27.434898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.648 [2024-07-13 06:08:27.434920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:30552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.648 [2024-07-13 06:08:27.434935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.648 [2024-07-13 06:08:27.434951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:30560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.648 
[2024-07-13 06:08:27.434965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.648 [2024-07-13 06:08:27.434980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:30568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.648 [2024-07-13 06:08:27.434995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.648 [2024-07-13 06:08:27.435011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:30576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.648 [2024-07-13 06:08:27.435025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.648 [2024-07-13 06:08:27.435041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:30584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.648 [2024-07-13 06:08:27.435055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.648 [2024-07-13 06:08:27.435071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:30592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.648 [2024-07-13 06:08:27.435086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.648 [2024-07-13 06:08:27.435102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:30600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.648 [2024-07-13 06:08:27.435131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.648 [2024-07-13 06:08:27.435163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:30608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.648 [2024-07-13 06:08:27.435177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.648 [2024-07-13 06:08:27.435193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:30616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.648 [2024-07-13 06:08:27.435222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.648 [2024-07-13 06:08:27.435237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:30624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.648 [2024-07-13 06:08:27.435251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.648 [2024-07-13 06:08:27.435266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:30048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.648 [2024-07-13 06:08:27.435280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.648 [2024-07-13 06:08:27.435295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:30056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.648 [2024-07-13 06:08:27.435309] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.648 [2024-07-13 06:08:27.435324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:30064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.648 [2024-07-13 06:08:27.435338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.648 [2024-07-13 06:08:27.435375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:30072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.648 [2024-07-13 06:08:27.435390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.648 [2024-07-13 06:08:27.435420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.648 [2024-07-13 06:08:27.435433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.648 [2024-07-13 06:08:27.435448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:30088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.648 [2024-07-13 06:08:27.435477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.648 [2024-07-13 06:08:27.435519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:30096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.648 [2024-07-13 06:08:27.435534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.648 [2024-07-13 06:08:27.435549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.648 [2024-07-13 06:08:27.435579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.648 [2024-07-13 06:08:27.435594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:30112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.648 [2024-07-13 06:08:27.435609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.648 [2024-07-13 06:08:27.435624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:30120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.648 [2024-07-13 06:08:27.435639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.648 [2024-07-13 06:08:27.435654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:30128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.648 [2024-07-13 06:08:27.435668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.648 [2024-07-13 06:08:27.435684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:30136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.648 [2024-07-13 06:08:27.435698] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.648 [2024-07-13 06:08:27.435714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:30144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.648 [2024-07-13 06:08:27.435729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.648 [2024-07-13 06:08:27.435744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:30152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.648 [2024-07-13 06:08:27.435758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.648 [2024-07-13 06:08:27.435774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:30160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.648 [2024-07-13 06:08:27.435789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.648 [2024-07-13 06:08:27.435805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:30168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.648 [2024-07-13 06:08:27.435840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.648 [2024-07-13 06:08:27.435856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:30632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.648 [2024-07-13 06:08:27.435887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.648 [2024-07-13 06:08:27.435918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:30640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.648 [2024-07-13 06:08:27.435932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.648 [2024-07-13 06:08:27.435947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:30648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.648 [2024-07-13 06:08:27.435961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.648 [2024-07-13 06:08:27.435976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:30656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.648 [2024-07-13 06:08:27.436007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.648 [2024-07-13 06:08:27.436022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:30664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.648 [2024-07-13 06:08:27.436036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.648 [2024-07-13 06:08:27.436052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:30672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.648 [2024-07-13 06:08:27.436066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.648 [2024-07-13 06:08:27.436082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:30680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.648 [2024-07-13 06:08:27.436096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.648 [2024-07-13 06:08:27.436111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:30688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.648 [2024-07-13 06:08:27.436140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.648 [2024-07-13 06:08:27.436156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:30696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.649 [2024-07-13 06:08:27.436185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.649 [2024-07-13 06:08:27.436201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:30704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.649 [2024-07-13 06:08:27.436215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.649 [2024-07-13 06:08:27.436230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:30712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.649 [2024-07-13 06:08:27.436245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.649 [2024-07-13 06:08:27.436260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:30720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.649 [2024-07-13 06:08:27.436275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.649 [2024-07-13 06:08:27.436297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:30728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.649 [2024-07-13 06:08:27.436312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.649 [2024-07-13 06:08:27.436327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:30736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.649 [2024-07-13 06:08:27.436341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.649 [2024-07-13 06:08:27.436357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:30176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.649 [2024-07-13 06:08:27.436371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.649 [2024-07-13 06:08:27.436387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:30184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.649 [2024-07-13 06:08:27.436401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.649 
[2024-07-13 06:08:27.436416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:30192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.649 [2024-07-13 06:08:27.436431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.649 [2024-07-13 06:08:27.436446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:30200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.649 [2024-07-13 06:08:27.436461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.649 [2024-07-13 06:08:27.436477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:30208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.649 [2024-07-13 06:08:27.436503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.649 [2024-07-13 06:08:27.436520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:30216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.649 [2024-07-13 06:08:27.436535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.649 [2024-07-13 06:08:27.436551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:30224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.649 [2024-07-13 06:08:27.436565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.649 [2024-07-13 06:08:27.436581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:30232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.649 [2024-07-13 06:08:27.436595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.649 [2024-07-13 06:08:27.436611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:30744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.649 [2024-07-13 06:08:27.436625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.649 [2024-07-13 06:08:27.436640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:30752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.649 [2024-07-13 06:08:27.436655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.649 [2024-07-13 06:08:27.436670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:30760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.649 [2024-07-13 06:08:27.436691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.649 [2024-07-13 06:08:27.436707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:30768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.649 [2024-07-13 06:08:27.436721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.649 [2024-07-13 06:08:27.436737] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:30776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.649 [2024-07-13 06:08:27.436751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.649 [2024-07-13 06:08:27.436768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:30784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.649 [2024-07-13 06:08:27.436782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.649 [2024-07-13 06:08:27.436798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:30792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.649 [2024-07-13 06:08:27.436827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.649 [2024-07-13 06:08:27.436859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:30800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.649 [2024-07-13 06:08:27.436873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.649 [2024-07-13 06:08:27.436889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:30808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.649 [2024-07-13 06:08:27.436919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.649 [2024-07-13 06:08:27.436951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:30240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.649 [2024-07-13 06:08:27.436965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.649 [2024-07-13 06:08:27.436982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:30248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.649 [2024-07-13 06:08:27.436996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.649 [2024-07-13 06:08:27.437013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:30256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.649 [2024-07-13 06:08:27.437027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.649 [2024-07-13 06:08:27.437043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:30264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.649 [2024-07-13 06:08:27.437057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.649 [2024-07-13 06:08:27.437073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:30272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.649 [2024-07-13 06:08:27.437088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.649 [2024-07-13 06:08:27.437103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:116 nsid:1 lba:30280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.649 [2024-07-13 06:08:27.437118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.649 [2024-07-13 06:08:27.437139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:30288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.649 [2024-07-13 06:08:27.437154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.649 [2024-07-13 06:08:27.437170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:30296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.649 [2024-07-13 06:08:27.437184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.649 [2024-07-13 06:08:27.437200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.649 [2024-07-13 06:08:27.437214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.649 [2024-07-13 06:08:27.437230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:30312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.649 [2024-07-13 06:08:27.437244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.649 [2024-07-13 06:08:27.437260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:30320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.649 [2024-07-13 06:08:27.437274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.649 [2024-07-13 06:08:27.437321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:30328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.649 [2024-07-13 06:08:27.437352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.649 [2024-07-13 06:08:27.437367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:30336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.649 [2024-07-13 06:08:27.437381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.649 [2024-07-13 06:08:27.437396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:30344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.649 [2024-07-13 06:08:27.437410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.649 [2024-07-13 06:08:27.437425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:30352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.649 [2024-07-13 06:08:27.437439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.649 [2024-07-13 06:08:27.437454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:30360 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.649 [2024-07-13 06:08:27.437468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.649 [2024-07-13 06:08:27.437511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:30816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.649 [2024-07-13 06:08:27.437541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.649 [2024-07-13 06:08:27.437556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:30824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.649 [2024-07-13 06:08:27.437575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.649 [2024-07-13 06:08:27.437607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.649 [2024-07-13 06:08:27.437621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.650 [2024-07-13 06:08:27.437643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.650 [2024-07-13 06:08:27.437658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.650 [2024-07-13 06:08:27.437674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:30848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.650 [2024-07-13 06:08:27.437688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.650 [2024-07-13 06:08:27.437703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:30856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.650 [2024-07-13 06:08:27.437718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.650 [2024-07-13 06:08:27.437734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.650 [2024-07-13 06:08:27.437748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.650 [2024-07-13 06:08:27.437764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:30872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.650 [2024-07-13 06:08:27.437778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.650 [2024-07-13 06:08:27.437793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:30880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.650 [2024-07-13 06:08:27.437808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.650 [2024-07-13 06:08:27.437823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:56.650 [2024-07-13 06:08:27.437838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.650 [2024-07-13 06:08:27.437853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:30896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.650 [2024-07-13 06:08:27.437867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.650 [2024-07-13 06:08:27.437884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:30904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.650 [2024-07-13 06:08:27.437898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.650 [2024-07-13 06:08:27.437914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:30912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.650 [2024-07-13 06:08:27.437928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.650 [2024-07-13 06:08:27.437944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:30920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.650 [2024-07-13 06:08:27.437958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.650 [2024-07-13 06:08:27.437974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.650 [2024-07-13 06:08:27.437988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.650 [2024-07-13 06:08:27.438003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:30936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.650 [2024-07-13 06:08:27.438023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.650 [2024-07-13 06:08:27.438039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:30944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.650 [2024-07-13 06:08:27.438054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.650 [2024-07-13 06:08:27.438070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:30952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.650 [2024-07-13 06:08:27.438086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.650 [2024-07-13 06:08:27.438102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:30960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.650 [2024-07-13 06:08:27.438116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.650 [2024-07-13 06:08:27.438132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:30368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.650 [2024-07-13 06:08:27.438147] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.650 [2024-07-13 06:08:27.438162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:30376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.650 [2024-07-13 06:08:27.438191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.650 [2024-07-13 06:08:27.438222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:30384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.650 [2024-07-13 06:08:27.438245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.650 [2024-07-13 06:08:27.438262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:30392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.650 [2024-07-13 06:08:27.438277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.650 [2024-07-13 06:08:27.438293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:30400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.650 [2024-07-13 06:08:27.438307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.650 [2024-07-13 06:08:27.438322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:30408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.650 [2024-07-13 06:08:27.438337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.650 [2024-07-13 06:08:27.438352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:30416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.650 [2024-07-13 06:08:27.438376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.650 [2024-07-13 06:08:27.438394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:30424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.650 [2024-07-13 06:08:27.438408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.650 [2024-07-13 06:08:27.438424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:30432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.650 [2024-07-13 06:08:27.438438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.650 [2024-07-13 06:08:27.438468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:30440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.650 [2024-07-13 06:08:27.438483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.650 [2024-07-13 06:08:27.438498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:30448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.650 [2024-07-13 06:08:27.438513] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.650 [2024-07-13 06:08:27.438528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:30456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.650 [2024-07-13 06:08:27.438542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.650 [2024-07-13 06:08:27.438558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:30464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.650 [2024-07-13 06:08:27.438572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.650 [2024-07-13 06:08:27.438628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:56.650 [2024-07-13 06:08:27.438643] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:56.650 [2024-07-13 06:08:27.438654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30472 len:8 PRP1 0x0 PRP2 0x0 00:21:56.650 [2024-07-13 06:08:27.438670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.650 [2024-07-13 06:08:27.438741] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x23678a0 was disconnected and freed. reset controller. 00:21:56.650 [2024-07-13 06:08:27.439983] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:56.650 [2024-07-13 06:08:27.440080] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23763d0 (9): Bad file descriptor 00:21:56.650 [2024-07-13 06:08:27.440535] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.650 [2024-07-13 06:08:27.440569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23763d0 with addr=10.0.0.2, port=4421 00:21:56.650 [2024-07-13 06:08:27.440596] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23763d0 is same with the state(5) to be set 00:21:56.650 [2024-07-13 06:08:27.440657] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23763d0 (9): Bad file descriptor 00:21:56.650 [2024-07-13 06:08:27.440695] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:56.650 [2024-07-13 06:08:27.440712] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:56.650 [2024-07-13 06:08:27.440726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:56.650 [2024-07-13 06:08:27.440933] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:56.650 [2024-07-13 06:08:27.440957] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:56.650 [2024-07-13 06:08:37.504893] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
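The burst of notices above is the host-side failover path of the multipath test: when the active TCP path is torn down, bdev_nvme aborts every request still queued on the deleted submission queue (each completes with the generic NVMe status ABORTED - SQ DELETION, SCT 0x0 / SC 0x08), frees the qpair, and starts reconnecting to the alternate listener at 10.0.0.2 port 4421. The first attempt fails with errno 111 (connection refused), so the reset is retried until it lands, roughly ten seconds later in this run ("Resetting controller successful"). A minimal sketch of how such a path flip is driven from the target side, using the same rpc.py listener calls that appear elsewhere in this log; the exact port pairing and the host's reconnect settings are assumptions about this particular run:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Drop the listener the host is currently connected to; queued I/O on that qpair
# completes as ABORTED - SQ DELETION (00/08) and the qpair is disconnected and freed.
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Until the alternate listener exists, reconnect attempts to port 4421 fail with errno 111.
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
# On the next reset attempt the controller reconnects ("Resetting controller successful").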
00:21:56.650 Received shutdown signal, test time was about 55.439918 seconds 00:21:56.650 00:21:56.651 Latency(us) 00:21:56.651 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:56.651 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:56.651 Verification LBA range: start 0x0 length 0x4000 00:21:56.651 Nvme0n1 : 55.44 7040.33 27.50 0.00 0.00 18150.08 218.76 7046430.72 00:21:56.651 =================================================================================================================== 00:21:56.651 Total : 7040.33 27.50 0.00 0.00 18150.08 218.76 7046430.72 00:21:56.651 06:08:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:56.651 06:08:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:21:56.651 06:08:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:56.651 06:08:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:21:56.651 06:08:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:56.651 06:08:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync 00:21:56.651 06:08:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:56.651 06:08:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e 00:21:56.651 06:08:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:56.651 06:08:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:56.651 rmmod nvme_tcp 00:21:56.651 rmmod nvme_fabrics 00:21:56.651 rmmod nvme_keyring 00:21:56.651 06:08:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:56.651 06:08:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e 00:21:56.651 06:08:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0 00:21:56.651 06:08:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 94796 ']' 00:21:56.651 06:08:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 94796 00:21:56.651 06:08:48 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 94796 ']' 00:21:56.651 06:08:48 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 94796 00:21:56.651 06:08:48 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:21:56.651 06:08:48 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:56.651 06:08:48 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94796 00:21:56.651 killing process with pid 94796 00:21:56.651 06:08:48 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:56.651 06:08:48 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:56.651 06:08:48 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94796' 00:21:56.651 06:08:48 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 94796 00:21:56.651 06:08:48 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 94796 00:21:56.651 06:08:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:56.651 06:08:48 nvmf_tcp.nvmf_host_multipath -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:56.651 06:08:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:56.651 06:08:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:56.651 06:08:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:56.651 06:08:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:56.651 06:08:48 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:56.651 06:08:48 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:56.651 06:08:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:56.651 00:21:56.651 real 0m59.891s 00:21:56.651 user 2m46.718s 00:21:56.651 sys 0m17.820s 00:21:56.651 06:08:48 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:56.651 06:08:48 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:56.651 ************************************ 00:21:56.651 END TEST nvmf_host_multipath 00:21:56.651 ************************************ 00:21:56.651 06:08:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:56.651 06:08:48 nvmf_tcp -- nvmf/nvmf.sh@118 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:21:56.651 06:08:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:56.651 06:08:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:56.651 06:08:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:56.651 ************************************ 00:21:56.651 START TEST nvmf_timeout 00:21:56.651 ************************************ 00:21:56.651 06:08:48 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:21:56.928 * Looking for test storage... 
00:21:56.928 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:56.928 06:08:48 nvmf_tcp.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:56.928 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:21:56.928 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:56.928 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:56.928 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:56.928 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:56.928 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:56.928 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:56.928 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:56.928 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:56.928 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:56.928 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:56.928 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:21:56.928 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=d95af516-4532-4483-a837-b3cd72acabce 00:21:56.928 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:56.928 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:56.928 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:56.928 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:56.928 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:56.928 06:08:48 nvmf_tcp.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:56.928 06:08:48 nvmf_tcp.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:56.928 06:08:48 nvmf_tcp.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:56.928 06:08:48 nvmf_tcp.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.928 06:08:48 nvmf_tcp.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.928 
06:08:48 nvmf_tcp.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.928 06:08:48 nvmf_tcp.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:21:56.928 06:08:48 nvmf_tcp.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.928 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:21:56.928 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:56.928 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:56.928 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:56.928 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:56.928 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:56.928 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:56.929 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:56.929 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:56.929 06:08:48 nvmf_tcp.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:56.929 06:08:48 nvmf_tcp.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:56.929 06:08:48 nvmf_tcp.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:56.929 06:08:48 nvmf_tcp.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:21:56.929 06:08:48 nvmf_tcp.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:56.929 06:08:48 nvmf_tcp.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:21:56.929 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:56.929 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:56.929 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:56.929 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:56.929 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:56.929 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:56.929 06:08:48 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:56.929 06:08:48 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:56.929 06:08:48 
nvmf_tcp.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:56.929 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:56.929 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:56.929 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:56.929 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:56.929 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:56.929 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:56.929 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:56.929 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:56.929 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:56.929 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:56.929 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:56.929 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:56.929 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:56.929 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:56.929 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:56.929 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:56.929 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:56.929 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:56.929 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:56.929 Cannot find device "nvmf_tgt_br" 00:21:56.929 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:21:56.929 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:56.929 Cannot find device "nvmf_tgt_br2" 00:21:56.929 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # true 00:21:56.929 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:56.929 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:56.929 Cannot find device "nvmf_tgt_br" 00:21:56.929 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:21:56.929 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:56.929 Cannot find device "nvmf_tgt_br2" 00:21:56.929 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:21:56.929 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:56.929 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:56.929 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:56.929 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:56.929 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:21:56.929 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:56.929 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:56.929 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:21:56.929 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:56.929 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:56.929 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:56.929 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:56.929 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:56.929 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:56.929 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:56.929 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:56.929 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:56.929 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:56.929 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:57.187 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:57.187 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:57.187 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:57.187 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:57.187 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:57.187 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:57.187 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:57.187 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:57.187 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:57.187 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:57.187 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:57.187 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:57.187 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:57.187 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:57.187 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:21:57.187 00:21:57.187 --- 10.0.0.2 ping statistics --- 00:21:57.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:57.187 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:21:57.187 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:57.187 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:21:57.187 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:21:57.187 00:21:57.187 --- 10.0.0.3 ping statistics --- 00:21:57.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:57.187 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:21:57.187 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:57.187 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:57.187 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:21:57.187 00:21:57.187 --- 10.0.0.1 ping statistics --- 00:21:57.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:57.187 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:21:57.187 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:57.187 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:21:57.187 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:57.187 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:57.187 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:57.187 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:57.187 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:57.187 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:57.187 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:57.187 06:08:48 nvmf_tcp.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:21:57.187 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:57.188 06:08:48 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:57.188 06:08:48 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:57.188 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=95934 00:21:57.188 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 95934 00:21:57.188 06:08:48 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 95934 ']' 00:21:57.188 06:08:48 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:57.188 06:08:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:57.188 06:08:48 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:57.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:57.188 06:08:48 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:57.188 06:08:48 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:57.188 06:08:48 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:57.188 [2024-07-13 06:08:48.838995] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:21:57.188 [2024-07-13 06:08:48.839100] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:57.446 [2024-07-13 06:08:48.973566] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:57.446 [2024-07-13 06:08:49.006843] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:57.446 [2024-07-13 06:08:49.007089] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:57.446 [2024-07-13 06:08:49.007159] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:57.446 [2024-07-13 06:08:49.007315] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:57.446 [2024-07-13 06:08:49.007443] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:57.446 [2024-07-13 06:08:49.007653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:57.446 [2024-07-13 06:08:49.007666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:57.446 [2024-07-13 06:08:49.035013] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:57.446 06:08:49 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:57.446 06:08:49 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:21:57.446 06:08:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:57.446 06:08:49 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:57.446 06:08:49 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:57.446 06:08:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:57.446 06:08:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:57.446 06:08:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:57.704 [2024-07-13 06:08:49.381479] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:57.704 06:08:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:57.962 Malloc0 00:21:58.220 06:08:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:58.220 06:08:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:58.479 06:08:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:58.738 [2024-07-13 06:08:50.420168] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:58.738 06:08:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=95977 00:21:58.738 06:08:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:21:58.738 06:08:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 95977 /var/tmp/bdevperf.sock 00:21:58.738 06:08:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 95977 ']' 00:21:58.738 06:08:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:58.738 06:08:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:58.738 06:08:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:58.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:58.738 06:08:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:58.738 06:08:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:58.997 [2024-07-13 06:08:50.493603] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:21:58.997 [2024-07-13 06:08:50.493707] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95977 ] 00:21:58.997 [2024-07-13 06:08:50.634451] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:58.997 [2024-07-13 06:08:50.676038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:58.997 [2024-07-13 06:08:50.708744] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:59.932 06:08:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:59.932 06:08:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:21:59.932 06:08:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:00.191 06:08:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:22:00.449 NVMe0n1 00:22:00.450 06:08:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:00.450 06:08:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=96001 00:22:00.450 06:08:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:22:00.450 Running I/O for 10 seconds... 
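Taken together, the trace above is the whole timeout-test bring-up: nvmf_tgt runs inside the nvmf_tgt_ns_spdk namespace (reachable from the host side at 10.0.0.2 over the veth/bridge setup created earlier), a TCP transport and a Malloc-backed subsystem are created and exposed on port 4420, and a separate bdevperf process attaches to it with a 5 second controller-loss timeout and a 2 second reconnect delay before the 10 second verify workload starts. Condensed into plain shell it looks roughly like the sketch below; every command and path is copied from the trace, but treat it as an outline of what timeout.sh does rather than a drop-in replacement for it:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Target side (the nvmf_tgt application itself is launched under
# "ip netns exec nvmf_tgt_ns_spdk" earlier in the log):
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Initiator side: bdevperf is started with its own RPC socket, then configured through it
# (the -r -1 and -f options are carried over verbatim from the trace).
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
# Kick off the verify job once the NVMe0n1 bdev shows up.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests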
00:22:01.385 06:08:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:01.645 [2024-07-13 06:08:53.228135] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.645 [2024-07-13 06:08:53.228188] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.645 [2024-07-13 06:08:53.228199] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.645 [2024-07-13 06:08:53.228208] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.645 [2024-07-13 06:08:53.228217] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.645 [2024-07-13 06:08:53.228226] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.645 [2024-07-13 06:08:53.228235] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.645 [2024-07-13 06:08:53.228243] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.645 [2024-07-13 06:08:53.228252] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.645 [2024-07-13 06:08:53.228260] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.645 [2024-07-13 06:08:53.228268] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.645 [2024-07-13 06:08:53.228277] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.645 [2024-07-13 06:08:53.228286] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.645 [2024-07-13 06:08:53.228294] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.645 [2024-07-13 06:08:53.228303] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.645 [2024-07-13 06:08:53.228312] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.645 [2024-07-13 06:08:53.228320] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.645 [2024-07-13 06:08:53.228329] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.645 [2024-07-13 06:08:53.228337] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.645 [2024-07-13 06:08:53.228345] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.645 [2024-07-13 06:08:53.228354] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.645 [2024-07-13 06:08:53.228362] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.645 [2024-07-13 06:08:53.228387] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.645 [2024-07-13 06:08:53.228397] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.645 [2024-07-13 06:08:53.228405] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.645 [2024-07-13 06:08:53.228414] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.645 [2024-07-13 06:08:53.228423] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.645 [2024-07-13 06:08:53.228431] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.645 [2024-07-13 06:08:53.228447] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.645 [2024-07-13 06:08:53.228456] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.645 [2024-07-13 06:08:53.228464] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.645 [2024-07-13 06:08:53.228473] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.645 [2024-07-13 06:08:53.228482] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.645 [2024-07-13 06:08:53.228490] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.645 [2024-07-13 06:08:53.228499] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.645 [2024-07-13 06:08:53.228507] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.645 [2024-07-13 06:08:53.228516] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.645 [2024-07-13 06:08:53.228524] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.645 [2024-07-13 06:08:53.228534] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.645 [2024-07-13 06:08:53.228543] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.645 [2024-07-13 06:08:53.228552] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.645 [2024-07-13 06:08:53.228560] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the 
state(5) to be set 00:22:01.645 [2024-07-13 06:08:53.228569] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.645 [2024-07-13 06:08:53.228577] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.645 [2024-07-13 06:08:53.228585] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.645 [2024-07-13 06:08:53.228609] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.645 [2024-07-13 06:08:53.228617] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.645 [2024-07-13 06:08:53.228625] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.645 [2024-07-13 06:08:53.228633] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.645 [2024-07-13 06:08:53.228642] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.645 [2024-07-13 06:08:53.228650] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.645 [2024-07-13 06:08:53.228658] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.645 [2024-07-13 06:08:53.228682] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.645 [2024-07-13 06:08:53.228691] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.645 [2024-07-13 06:08:53.228699] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.645 [2024-07-13 06:08:53.228709] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.645 [2024-07-13 06:08:53.228717] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.645 [2024-07-13 06:08:53.228726] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.645 [2024-07-13 06:08:53.228734] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.646 [2024-07-13 06:08:53.228742] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.646 [2024-07-13 06:08:53.228766] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.646 [2024-07-13 06:08:53.228774] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.646 [2024-07-13 06:08:53.228797] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.646 [2024-07-13 06:08:53.228805] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.646 [2024-07-13 06:08:53.228812] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.646 [2024-07-13 06:08:53.228820] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.646 [2024-07-13 06:08:53.228828] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.646 [2024-07-13 06:08:53.228836] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.646 [2024-07-13 06:08:53.228844] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.646 [2024-07-13 06:08:53.228852] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.646 [2024-07-13 06:08:53.228860] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.646 [2024-07-13 06:08:53.228868] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.646 [2024-07-13 06:08:53.228876] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.646 [2024-07-13 06:08:53.228883] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.646 [2024-07-13 06:08:53.228891] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.646 [2024-07-13 06:08:53.228899] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.646 [2024-07-13 06:08:53.228907] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.646 [2024-07-13 06:08:53.228914] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.646 [2024-07-13 06:08:53.228922] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.646 [2024-07-13 06:08:53.228930] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.646 [2024-07-13 06:08:53.228938] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.646 [2024-07-13 06:08:53.228946] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.646 [2024-07-13 06:08:53.228954] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.646 [2024-07-13 06:08:53.228963] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.646 [2024-07-13 06:08:53.228970] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.646 [2024-07-13 
06:08:53.228978] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.646 [2024-07-13 06:08:53.228986] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.646 [2024-07-13 06:08:53.228994] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.646 [2024-07-13 06:08:53.229001] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.646 [2024-07-13 06:08:53.229025] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.646 [2024-07-13 06:08:53.229048] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.646 [2024-07-13 06:08:53.229057] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.646 [2024-07-13 06:08:53.229066] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.646 [2024-07-13 06:08:53.229074] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.646 [2024-07-13 06:08:53.229083] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.646 [2024-07-13 06:08:53.229091] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.646 [2024-07-13 06:08:53.229100] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.646 [2024-07-13 06:08:53.229115] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.646 [2024-07-13 06:08:53.229124] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.646 [2024-07-13 06:08:53.229132] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.646 [2024-07-13 06:08:53.229141] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.646 [2024-07-13 06:08:53.229149] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.646 [2024-07-13 06:08:53.229158] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.646 [2024-07-13 06:08:53.229166] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.646 [2024-07-13 06:08:53.229175] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.646 [2024-07-13 06:08:53.229184] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.646 [2024-07-13 06:08:53.229192] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same 
with the state(5) to be set 00:22:01.646 [2024-07-13 06:08:53.229200] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.646 [2024-07-13 06:08:53.229209] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3e680 is same with the state(5) to be set 00:22:01.646 [2024-07-13 06:08:53.229281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:62752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.646 [2024-07-13 06:08:53.229312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.646 [2024-07-13 06:08:53.229334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:62760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.646 [2024-07-13 06:08:53.229346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.646 [2024-07-13 06:08:53.229358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:62768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.646 [2024-07-13 06:08:53.229367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.646 [2024-07-13 06:08:53.229407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:62776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.646 [2024-07-13 06:08:53.229417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.646 [2024-07-13 06:08:53.229427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:62784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.646 [2024-07-13 06:08:53.229436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.646 [2024-07-13 06:08:53.229447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:62792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.646 [2024-07-13 06:08:53.229456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.646 [2024-07-13 06:08:53.229467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:62800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.646 [2024-07-13 06:08:53.229476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.646 [2024-07-13 06:08:53.229499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:62808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.646 [2024-07-13 06:08:53.229510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.646 [2024-07-13 06:08:53.229521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:62816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.646 [2024-07-13 06:08:53.229530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.646 [2024-07-13 
06:08:53.229541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:62824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.646 [2024-07-13 06:08:53.229550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.646 [2024-07-13 06:08:53.229561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:62832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.646 [2024-07-13 06:08:53.229569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.646 [2024-07-13 06:08:53.229580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:62840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.646 [2024-07-13 06:08:53.229605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.647 [2024-07-13 06:08:53.229634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:62848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.647 [2024-07-13 06:08:53.229647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.647 [2024-07-13 06:08:53.229659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:62856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.647 [2024-07-13 06:08:53.229668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.647 [2024-07-13 06:08:53.229680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:62864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.647 [2024-07-13 06:08:53.229689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.647 [2024-07-13 06:08:53.229701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:62872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.647 [2024-07-13 06:08:53.229715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.647 [2024-07-13 06:08:53.229727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:62880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.647 [2024-07-13 06:08:53.229737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.647 [2024-07-13 06:08:53.229749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:62888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.647 [2024-07-13 06:08:53.229759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.647 [2024-07-13 06:08:53.229770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:62896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.647 [2024-07-13 06:08:53.229780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.647 [2024-07-13 06:08:53.229792] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:62904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.647 [2024-07-13 06:08:53.229801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.647 [2024-07-13 06:08:53.229813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:62912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.647 [2024-07-13 06:08:53.229822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.647 [2024-07-13 06:08:53.229834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:62920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.647 [2024-07-13 06:08:53.229843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.647 [2024-07-13 06:08:53.229854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:62928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.647 [2024-07-13 06:08:53.229864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.647 [2024-07-13 06:08:53.229875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:62936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.647 [2024-07-13 06:08:53.229884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.647 [2024-07-13 06:08:53.229896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:62944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.647 [2024-07-13 06:08:53.229905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.647 [2024-07-13 06:08:53.229918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:62952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.647 [2024-07-13 06:08:53.229927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.647 [2024-07-13 06:08:53.229939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:62960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.647 [2024-07-13 06:08:53.229948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.647 [2024-07-13 06:08:53.229960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:62968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.647 [2024-07-13 06:08:53.229969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.647 [2024-07-13 06:08:53.229981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:62976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.647 [2024-07-13 06:08:53.229990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.647 [2024-07-13 06:08:53.230002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:99 nsid:1 lba:62984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.647 [2024-07-13 06:08:53.230011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.647 [2024-07-13 06:08:53.230023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:62992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.647 [2024-07-13 06:08:53.230032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.647 [2024-07-13 06:08:53.230043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:63000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.647 [2024-07-13 06:08:53.230069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.647 [2024-07-13 06:08:53.230080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:63008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.647 [2024-07-13 06:08:53.230089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.647 [2024-07-13 06:08:53.230100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:63016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.647 [2024-07-13 06:08:53.230109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.647 [2024-07-13 06:08:53.230120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:63024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.647 [2024-07-13 06:08:53.230129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.647 [2024-07-13 06:08:53.230140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:63032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.647 [2024-07-13 06:08:53.230156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.647 [2024-07-13 06:08:53.230167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:63040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.647 [2024-07-13 06:08:53.230176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.647 [2024-07-13 06:08:53.230187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:63048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.647 [2024-07-13 06:08:53.230196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.647 [2024-07-13 06:08:53.230207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:63056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.647 [2024-07-13 06:08:53.230232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.647 [2024-07-13 06:08:53.230269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:63064 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.647 [2024-07-13 06:08:53.230280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.647 [2024-07-13 06:08:53.230292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:63072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.647 [2024-07-13 06:08:53.230301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.647 [2024-07-13 06:08:53.230313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:63080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.647 [2024-07-13 06:08:53.230322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.647 [2024-07-13 06:08:53.230334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:63088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.647 [2024-07-13 06:08:53.230344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.647 [2024-07-13 06:08:53.230356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:63096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.647 [2024-07-13 06:08:53.230365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.647 [2024-07-13 06:08:53.230386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:63104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.647 [2024-07-13 06:08:53.230396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.647 [2024-07-13 06:08:53.230407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:63112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.647 [2024-07-13 06:08:53.230417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.647 [2024-07-13 06:08:53.230429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:63120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.647 [2024-07-13 06:08:53.230438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.647 [2024-07-13 06:08:53.230449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:63128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.648 [2024-07-13 06:08:53.230461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.648 [2024-07-13 06:08:53.230473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:63136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.648 [2024-07-13 06:08:53.230482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.648 [2024-07-13 06:08:53.230494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:63144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:01.648 [2024-07-13 06:08:53.230503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.648 [2024-07-13 06:08:53.230514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:63152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.648 [2024-07-13 06:08:53.230524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.648 [2024-07-13 06:08:53.230535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:63160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.648 [2024-07-13 06:08:53.230544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.648 [2024-07-13 06:08:53.230555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:63168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.648 [2024-07-13 06:08:53.230565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.648 [2024-07-13 06:08:53.230577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:63176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.648 [2024-07-13 06:08:53.230589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.648 [2024-07-13 06:08:53.230601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:63184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.648 [2024-07-13 06:08:53.230610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.648 [2024-07-13 06:08:53.230622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:63192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.648 [2024-07-13 06:08:53.230646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.648 [2024-07-13 06:08:53.230672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:63200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.648 [2024-07-13 06:08:53.230681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.648 [2024-07-13 06:08:53.230692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:63208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.648 [2024-07-13 06:08:53.230700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.648 [2024-07-13 06:08:53.230711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:63216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.648 [2024-07-13 06:08:53.230737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.648 [2024-07-13 06:08:53.230749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:63224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.648 [2024-07-13 06:08:53.230758] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.648 [2024-07-13 06:08:53.230769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:63232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.648 [2024-07-13 06:08:53.230777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.648 [2024-07-13 06:08:53.230805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:63240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.648 [2024-07-13 06:08:53.230814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.648 [2024-07-13 06:08:53.230826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:63248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.648 [2024-07-13 06:08:53.230835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.648 [2024-07-13 06:08:53.230847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:63256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.648 [2024-07-13 06:08:53.230858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.648 [2024-07-13 06:08:53.230870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:63264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.648 [2024-07-13 06:08:53.230879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.648 [2024-07-13 06:08:53.230890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:63272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.648 [2024-07-13 06:08:53.230900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.648 [2024-07-13 06:08:53.230911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:63280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.648 [2024-07-13 06:08:53.230920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.648 [2024-07-13 06:08:53.230932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:63288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.648 [2024-07-13 06:08:53.230941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.648 [2024-07-13 06:08:53.230952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:63296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.648 [2024-07-13 06:08:53.230961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.648 [2024-07-13 06:08:53.230973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:63304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.648 [2024-07-13 06:08:53.230984] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.648 [2024-07-13 06:08:53.230995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:63312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.648 [2024-07-13 06:08:53.231005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.648 [2024-07-13 06:08:53.231016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:63320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.648 [2024-07-13 06:08:53.231026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.648 [2024-07-13 06:08:53.231037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:63328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.648 [2024-07-13 06:08:53.231046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.648 [2024-07-13 06:08:53.231057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:63336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.648 [2024-07-13 06:08:53.231067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.648 [2024-07-13 06:08:53.231078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:63344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.648 [2024-07-13 06:08:53.231088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.648 [2024-07-13 06:08:53.231100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:63352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.648 [2024-07-13 06:08:53.231109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.648 [2024-07-13 06:08:53.231121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:63360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.648 [2024-07-13 06:08:53.231130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.648 [2024-07-13 06:08:53.231142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:63368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.648 [2024-07-13 06:08:53.231151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.648 [2024-07-13 06:08:53.231165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:63376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.648 [2024-07-13 06:08:53.231174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.648 [2024-07-13 06:08:53.231186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:63384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.648 [2024-07-13 06:08:53.231196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.648 [2024-07-13 06:08:53.231208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:63392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.648 [2024-07-13 06:08:53.231217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.648 [2024-07-13 06:08:53.231229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:63400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.648 [2024-07-13 06:08:53.231238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.648 [2024-07-13 06:08:53.231260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:63408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.648 [2024-07-13 06:08:53.231269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.648 [2024-07-13 06:08:53.231280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:63416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.649 [2024-07-13 06:08:53.231289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.649 [2024-07-13 06:08:53.231301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:63424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.649 [2024-07-13 06:08:53.231310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.649 [2024-07-13 06:08:53.231322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:63432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.649 [2024-07-13 06:08:53.231332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.649 [2024-07-13 06:08:53.231344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:63440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.649 [2024-07-13 06:08:53.231353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.649 [2024-07-13 06:08:53.231365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:63448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.649 [2024-07-13 06:08:53.231374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.649 [2024-07-13 06:08:53.231386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:63456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.649 [2024-07-13 06:08:53.231395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.649 [2024-07-13 06:08:53.231406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:63464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.649 [2024-07-13 06:08:53.231415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:01.649 [2024-07-13 06:08:53.231437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:63472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.649 [2024-07-13 06:08:53.231448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.649 [2024-07-13 06:08:53.231459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:63480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.649 [2024-07-13 06:08:53.231469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.649 [2024-07-13 06:08:53.231480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:63488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.649 [2024-07-13 06:08:53.231489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.649 [2024-07-13 06:08:53.231501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:63496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.649 [2024-07-13 06:08:53.231511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.649 [2024-07-13 06:08:53.231523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:63504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.649 [2024-07-13 06:08:53.231532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.649 [2024-07-13 06:08:53.231544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:63512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.649 [2024-07-13 06:08:53.231554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.649 [2024-07-13 06:08:53.231566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:63520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.649 [2024-07-13 06:08:53.231575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.649 [2024-07-13 06:08:53.231587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:63528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.649 [2024-07-13 06:08:53.231596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.649 [2024-07-13 06:08:53.231607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:63536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.649 [2024-07-13 06:08:53.231616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.649 [2024-07-13 06:08:53.231628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:63544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.649 [2024-07-13 06:08:53.231637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.649 [2024-07-13 06:08:53.231648] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:63552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.649 [2024-07-13 06:08:53.231657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.649 [2024-07-13 06:08:53.231669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:63560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.649 [2024-07-13 06:08:53.231681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.649 [2024-07-13 06:08:53.231692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:63568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.649 [2024-07-13 06:08:53.231701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.649 [2024-07-13 06:08:53.231713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:63576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.649 [2024-07-13 06:08:53.231722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.649 [2024-07-13 06:08:53.231734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:63584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.649 [2024-07-13 06:08:53.231743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.649 [2024-07-13 06:08:53.231754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:63592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.649 [2024-07-13 06:08:53.231764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.649 [2024-07-13 06:08:53.231776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:63600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.649 [2024-07-13 06:08:53.231785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.649 [2024-07-13 06:08:53.231797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.649 [2024-07-13 06:08:53.231806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.649 [2024-07-13 06:08:53.231823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:63616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.649 [2024-07-13 06:08:53.231833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.649 [2024-07-13 06:08:53.231844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:63624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.649 [2024-07-13 06:08:53.231854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.649 [2024-07-13 06:08:53.231865] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:63632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.649 [2024-07-13 06:08:53.231874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.649 [2024-07-13 06:08:53.231886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:63656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.649 [2024-07-13 06:08:53.231899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.649 [2024-07-13 06:08:53.231911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:63664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.649 [2024-07-13 06:08:53.231921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.649 [2024-07-13 06:08:53.231932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:63672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.649 [2024-07-13 06:08:53.231941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.649 [2024-07-13 06:08:53.231953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:63680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.649 [2024-07-13 06:08:53.231962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.649 [2024-07-13 06:08:53.231973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:63688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.649 [2024-07-13 06:08:53.231983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.649 [2024-07-13 06:08:53.231994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:63696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.649 [2024-07-13 06:08:53.232003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.649 [2024-07-13 06:08:53.232015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:63704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.650 [2024-07-13 06:08:53.232025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.650 [2024-07-13 06:08:53.232037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:63712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.650 [2024-07-13 06:08:53.232046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.650 [2024-07-13 06:08:53.232058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:63720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.650 [2024-07-13 06:08:53.232067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.650 [2024-07-13 06:08:53.232078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:111 nsid:1 lba:63728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.650 [2024-07-13 06:08:53.232087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.650 [2024-07-13 06:08:53.232099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:63736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.650 [2024-07-13 06:08:53.232108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.650 [2024-07-13 06:08:53.232119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:63744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.650 [2024-07-13 06:08:53.232128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.650 [2024-07-13 06:08:53.232140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:63752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.650 [2024-07-13 06:08:53.232149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.650 [2024-07-13 06:08:53.232160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:63760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.650 [2024-07-13 06:08:53.232169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.650 [2024-07-13 06:08:53.232180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:63768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.650 [2024-07-13 06:08:53.232190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.650 [2024-07-13 06:08:53.232201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:63640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.650 [2024-07-13 06:08:53.232210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.650 [2024-07-13 06:08:53.232221] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc6f0 is same with the state(5) to be set 00:22:01.650 [2024-07-13 06:08:53.232235] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:01.650 [2024-07-13 06:08:53.232243] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:01.650 [2024-07-13 06:08:53.232252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63648 len:8 PRP1 0x0 PRP2 0x0 00:22:01.650 [2024-07-13 06:08:53.232261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.650 [2024-07-13 06:08:53.232304] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xadc6f0 was disconnected and freed. reset controller. 
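The completions above all report the status pair (00/08), which spdk_nvme_print_completion spells out as ABORTED - SQ DELETION: status code type 0x0 (generic command status) with status code 0x08 (command aborted due to SQ deletion), i.e. I/O still queued on the qpair is aborted once its submission queue is torn down. A small bash helper, purely illustrative and not taken from the SPDK tree, that decodes the pair seen in this log:

# Hypothetical helper, not from host/timeout.sh: decode the "(SCT/SC)" pair printed
# by spdk_nvme_print_completion; only the generic status code seen in this log is tabulated.
decode_nvme_status() {
    case "$1/$2" in
        00/08) echo "Generic Command Status: Command Aborted due to SQ Deletion" ;;
        *)     echo "Status code type 0x$1, status code 0x$2 (not in this table)" ;;
    esac
}
decode_nvme_status 00 08   # -> Generic Command Status: Command Aborted due to SQ Deletion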
00:22:01.650 [2024-07-13 06:08:53.232581] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.650 [2024-07-13 06:08:53.232663] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabd760 (9): Bad file descriptor 00:22:01.650 [2024-07-13 06:08:53.232761] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.650 [2024-07-13 06:08:53.232783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabd760 with addr=10.0.0.2, port=4420 00:22:01.650 [2024-07-13 06:08:53.232794] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabd760 is same with the state(5) to be set 00:22:01.650 [2024-07-13 06:08:53.232811] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabd760 (9): Bad file descriptor 00:22:01.650 [2024-07-13 06:08:53.232828] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.650 [2024-07-13 06:08:53.232841] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.650 [2024-07-13 06:08:53.232851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.650 [2024-07-13 06:08:53.232871] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:01.650 [2024-07-13 06:08:53.232883] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.650 06:08:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:22:03.549 [2024-07-13 06:08:55.233140] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.549 [2024-07-13 06:08:55.233205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabd760 with addr=10.0.0.2, port=4420 00:22:03.549 [2024-07-13 06:08:55.233220] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabd760 is same with the state(5) to be set 00:22:03.549 [2024-07-13 06:08:55.233245] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabd760 (9): Bad file descriptor 00:22:03.549 [2024-07-13 06:08:55.233264] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.549 [2024-07-13 06:08:55.233273] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.549 [2024-07-13 06:08:55.233283] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.549 [2024-07-13 06:08:55.233308] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
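The uring_sock_create failures in this stretch all carry errno = 111; on Linux that is ECONNREFUSED, i.e. connect() is being refused because nothing is accepting on 10.0.0.2:4420 at this point, so each reconnect attempt logged here fails the same way until the host finally leaves the controller in a failed state (the 06:08:59 entries further down). A quick way to confirm the errno mapping, illustrative only and assuming python3 is on the PATH (it is not part of the test scripts):

# Illustrative one-liner, not from the SPDK repo: translate errno 111 on this host.
python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
# expected output: ECONNREFUSED - Connection refused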
00:22:03.549 [2024-07-13 06:08:55.233336] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.549 06:08:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:22:03.549 06:08:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:22:03.549 06:08:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:04.115 06:08:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:22:04.115 06:08:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:22:04.115 06:08:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:22:04.115 06:08:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:22:04.115 06:08:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:22:04.115 06:08:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:22:06.015 [2024-07-13 06:08:57.233630] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.015 [2024-07-13 06:08:57.233684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabd760 with addr=10.0.0.2, port=4420 00:22:06.015 [2024-07-13 06:08:57.233700] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabd760 is same with the state(5) to be set 00:22:06.015 [2024-07-13 06:08:57.233725] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabd760 (9): Bad file descriptor 00:22:06.015 [2024-07-13 06:08:57.233745] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:06.015 [2024-07-13 06:08:57.233755] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:06.015 [2024-07-13 06:08:57.233766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:06.015 [2024-07-13 06:08:57.233808] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:06.015 [2024-07-13 06:08:57.233821] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:07.914 [2024-07-13 06:08:59.233953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:07.914 [2024-07-13 06:08:59.234010] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:07.914 [2024-07-13 06:08:59.234022] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:07.914 [2024-07-13 06:08:59.234032] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:22:07.914 [2024-07-13 06:08:59.234057] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
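Between the reconnect attempts, the harness checks that the controller and bdev created by bdevperf still exist by name (the get_controller/get_bdev traces above, which expand into rpc.py plus jq). Condensed into plain commands, using the same socket and paths as the trace, the check is roughly:

# Condensed from the host/timeout.sh trace above; socket and paths are as logged.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock

"$rpc" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name'   # "NVMe0" while the controller is attached, empty once it is lost
"$rpc" -s "$sock" bdev_get_bdevs | jq -r '.[].name'              # "NVMe0n1" while the bdev exists, empty once it is removed

The later @62/@63 checks at 06:09:01 compare the same two outputs against the empty string, i.e. they assert that both names are gone once the controller has been dropped.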
00:22:08.847 00:22:08.847 Latency(us) 00:22:08.847 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:08.847 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:08.847 Verification LBA range: start 0x0 length 0x4000 00:22:08.848 NVMe0n1 : 8.15 962.01 3.76 15.70 0.00 130845.13 3589.59 7046430.72 00:22:08.848 =================================================================================================================== 00:22:08.848 Total : 962.01 3.76 15.70 0.00 130845.13 3589.59 7046430.72 00:22:08.848 0 00:22:09.106 06:09:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:22:09.106 06:09:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:22:09.106 06:09:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:09.364 06:09:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:22:09.364 06:09:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:22:09.364 06:09:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:22:09.364 06:09:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:22:09.621 06:09:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:22:09.622 06:09:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@65 -- # wait 96001 00:22:09.622 06:09:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 95977 00:22:09.622 06:09:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 95977 ']' 00:22:09.622 06:09:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 95977 00:22:09.880 06:09:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:22:09.880 06:09:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:09.880 06:09:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 95977 00:22:09.880 06:09:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:09.880 06:09:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:09.880 killing process with pid 95977 00:22:09.880 06:09:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 95977' 00:22:09.880 06:09:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 95977 00:22:09.880 Received shutdown signal, test time was about 9.291136 seconds 00:22:09.880 00:22:09.880 Latency(us) 00:22:09.880 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:09.880 =================================================================================================================== 00:22:09.880 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:09.880 06:09:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 95977 00:22:09.880 06:09:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:10.138 [2024-07-13 06:09:01.748057] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:10.138 06:09:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=96118 00:22:10.138 06:09:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@73 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:22:10.138 06:09:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 96118 /var/tmp/bdevperf.sock 00:22:10.138 06:09:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 96118 ']' 00:22:10.138 06:09:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:10.138 06:09:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:10.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:10.138 06:09:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:10.138 06:09:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:10.138 06:09:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:10.138 [2024-07-13 06:09:01.822689] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:22:10.138 [2024-07-13 06:09:01.822780] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96118 ] 00:22:10.396 [2024-07-13 06:09:01.964719] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:10.396 [2024-07-13 06:09:02.007584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:10.396 [2024-07-13 06:09:02.041273] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:10.396 06:09:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:10.396 06:09:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:22:10.396 06:09:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:10.654 06:09:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:22:10.912 NVMe0n1 00:22:11.171 06:09:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=96134 00:22:11.171 06:09:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:11.171 06:09:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:22:11.171 Running I/O for 10 seconds... 
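For orientation, the commands traced in the block above amount to: start bdevperf idle, attach NVMe0 over TCP with finite ctrlr-loss, fast-io-fail and reconnect timers, then launch the verify workload over the RPC socket. A minimal bash sketch of that sequence, reusing the flags shown in the trace; the SPDK/SOCK variables, the backgrounding with & and the pid captures are assumptions added here, and the real host/timeout.sh additionally waits for the RPC socket via waitforlisten before issuing RPCs:

    SPDK=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/bdevperf.sock

    # Start bdevperf on core mask 0x4: queue depth 128, 4096-byte verify I/O for 10 s
    # (flags copied verbatim from the trace)
    "$SPDK/build/examples/bdevperf" -m 0x4 -z -r "$SOCK" -q 128 -o 4096 -w verify -t 10 -f &
    bdevperf_pid=$!

    # Options and attach parameters copied from the trace: retry count -1, then NVMe0
    # over TCP with ctrlr-loss / fast-io-fail / reconnect-delay timers
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_set_options -r -1
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1

    # Kick off the workload in the background so the listener can be dropped mid-run
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests &
    rpc_pid=$!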
00:22:12.158 06:09:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:12.420 [2024-07-13 06:09:03.976575] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ea10 is same with the state(5) to be set
00:22:12.420 [same tcp.c:1607 message repeated for tqpair=0x1f3ea10 from 06:09:03.976575 through 06:09:03.977498, interleaved with the nvme_qpair/nvme_tcp output below]
00:22:12.420 [2024-07-13 06:09:03.977014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:12.420 [2024-07-13 06:09:03.977061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.420 [same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeated for admin cid:1, cid:2 and cid:3]
00:22:12.421 [2024-07-13 06:09:03.977145] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32760 is same with the state(5) to be set
00:22:12.421 [2024-07-13 06:09:03.978464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:60856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.421 [2024-07-13 06:09:03.978485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.423 [same command/completion pair repeated for every outstanding I/O: READ lba:60864 through lba:61592 and WRITE lba:61608 through lba:61720, all len:8, all ABORTED - SQ DELETION (00/08)]
00:22:12.424 [2024-07-13 06:09:03.980999] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf51550 is same with the state(5) to be set
00:22:12.424 [2024-07-13 06:09:03.981011] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:22:12.424 [2024-07-13 06:09:03.981019] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:12.424 [2024-07-13 06:09:03.981027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61600 len:8 PRP1 0x0 PRP2 0x0
00:22:12.424 [2024-07-13 06:09:03.981036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.424 [same aborting queued i/o sequence repeated for the queued WRITE commands lba:61728 through lba:61840]
00:22:12.424 [2024-07-13 06:09:03.995911] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.424 [2024-07-13 06:09:03.995920] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.424 [2024-07-13 06:09:03.995927] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.424 [2024-07-13 06:09:03.995935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61848 len:8 PRP1 0x0 PRP2 0x0 00:22:12.424 [2024-07-13 06:09:03.995944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.424 [2024-07-13 06:09:03.995953] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.424 [2024-07-13 06:09:03.995960] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.424 [2024-07-13 06:09:03.995984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61856 len:8 PRP1 0x0 PRP2 0x0 00:22:12.424 [2024-07-13 06:09:03.995993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.424 [2024-07-13 06:09:03.996002] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.424 [2024-07-13 06:09:03.996010] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.424 [2024-07-13 06:09:03.996018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61864 len:8 PRP1 0x0 PRP2 0x0 00:22:12.424 [2024-07-13 06:09:03.996027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.424 [2024-07-13 06:09:03.996036] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.424 [2024-07-13 06:09:03.996058] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.424 [2024-07-13 06:09:03.996065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61872 len:8 PRP1 0x0 PRP2 0x0 00:22:12.424 [2024-07-13 06:09:03.996074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.424 [2024-07-13 06:09:03.996118] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf51550 was disconnected and freed. reset controller. 
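The dump above is the host-side bdev_nvme layer draining qpair 0xf51550: the outstanding and queued I/O on sqid:1 is completed with ABORTED - SQ DELETION (00/08) once the submission queue is torn down, the qpair is then disconnected and freed, and a controller reset is scheduled. A minimal sketch for sizing that abort storm from a saved copy of this console output (the build.log filename is only an assumption, not something the test produces):

    # assumed: the console text above was saved to build.log
    grep -c 'ABORTED - SQ DELETION' build.log        # count of commands completed as aborted
    grep -n 'was disconnected and freed' build.log   # qpair teardown points, e.g. 0xf51550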
00:22:12.424 [2024-07-13 06:09:03.996187] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf32760 (9): Bad file descriptor 00:22:12.425 [2024-07-13 06:09:03.996481] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:12.425 [2024-07-13 06:09:03.996591] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:12.425 [2024-07-13 06:09:03.996615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf32760 with addr=10.0.0.2, port=4420 00:22:12.425 [2024-07-13 06:09:03.996628] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32760 is same with the state(5) to be set 00:22:12.425 [2024-07-13 06:09:03.996647] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf32760 (9): Bad file descriptor 00:22:12.425 [2024-07-13 06:09:03.996664] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:12.425 [2024-07-13 06:09:03.996674] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:12.425 [2024-07-13 06:09:03.996684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:12.425 [2024-07-13 06:09:03.996705] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:12.425 [2024-07-13 06:09:03.996716] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:12.425 06:09:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:22:13.360 [2024-07-13 06:09:04.996830] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:13.360 [2024-07-13 06:09:04.996915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf32760 with addr=10.0.0.2, port=4420 00:22:13.360 [2024-07-13 06:09:04.996931] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32760 is same with the state(5) to be set 00:22:13.360 [2024-07-13 06:09:04.996957] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf32760 (9): Bad file descriptor 00:22:13.360 [2024-07-13 06:09:04.996977] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:13.360 [2024-07-13 06:09:04.996987] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:13.360 [2024-07-13 06:09:04.996997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:13.360 [2024-07-13 06:09:04.997025] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
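Here the reconnect attempts themselves fail: uring_sock_create() returns errno 111 (ECONNREFUSED on Linux) because nothing is accepting connections on 10.0.0.2:4420 at this point in the test, so controller reinitialization fails and host/timeout.sh@90 simply sleeps for a second until the listener is restored. Below is a condensed sketch of the listener drop/restore pair this test exercises, reusing the exact rpc.py invocations that appear elsewhere in this log (the add at timeout.sh@91 just below, the remove at timeout.sh@99 later); it is a reconstruction of the sequence, not a copy of the test script itself:

    # drop the listener -> host-side reconnects fail with connect() errno = 111
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 1
    # restore the listener -> the next reconnect succeeds and the controller reset completes
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420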
00:22:13.360 [2024-07-13 06:09:04.997037] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:13.360 06:09:05 nvmf_tcp.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:13.620 [2024-07-13 06:09:05.255345] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:13.620 06:09:05 nvmf_tcp.nvmf_timeout -- host/timeout.sh@92 -- # wait 96134 00:22:14.557 [2024-07-13 06:09:06.017166] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:21.120 00:22:21.121 Latency(us) 00:22:21.121 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:21.121 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:21.121 Verification LBA range: start 0x0 length 0x4000 00:22:21.121 NVMe0n1 : 10.01 5745.78 22.44 0.00 0.00 22237.27 1608.61 3050402.91 00:22:21.121 =================================================================================================================== 00:22:21.121 Total : 5745.78 22.44 0.00 0.00 22237.27 1608.61 3050402.91 00:22:21.121 0 00:22:21.121 06:09:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=96241 00:22:21.121 06:09:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:21.121 06:09:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:22:21.378 Running I/O for 10 seconds... 00:22:22.313 06:09:13 nvmf_tcp.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:22.575 [2024-07-13 06:09:14.076604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:57504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.575 [2024-07-13 06:09:14.076689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.575 [2024-07-13 06:09:14.076751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:57632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.575 [2024-07-13 06:09:14.076771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.575 [2024-07-13 06:09:14.076785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:57640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.575 [2024-07-13 06:09:14.076795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.575 [2024-07-13 06:09:14.076808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:57648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.575 [2024-07-13 06:09:14.076817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.575 [2024-07-13 06:09:14.076828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:57656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.575 [2024-07-13 06:09:14.076852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.575 [2024-07-13 06:09:14.076864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:57664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.575 [2024-07-13 06:09:14.076873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.575 [2024-07-13 06:09:14.076884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:57672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.575 [2024-07-13 06:09:14.076893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.575 [2024-07-13 06:09:14.076905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:57680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.575 [2024-07-13 06:09:14.076914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.575 [2024-07-13 06:09:14.076925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:57688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.575 [2024-07-13 06:09:14.076934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.575 [2024-07-13 06:09:14.076962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:57696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.575 [2024-07-13 06:09:14.076971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.575 [2024-07-13 06:09:14.076987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:57704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.575 [2024-07-13 06:09:14.076996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.575 [2024-07-13 06:09:14.077019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:57712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.575 [2024-07-13 06:09:14.077039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.575 [2024-07-13 06:09:14.077051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:57720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.575 [2024-07-13 06:09:14.077060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.575 [2024-07-13 06:09:14.077072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:57728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.575 [2024-07-13 06:09:14.077081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.575 [2024-07-13 06:09:14.077092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:57736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.575 [2024-07-13 06:09:14.077101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:22.575 [2024-07-13 06:09:14.077113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:57744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.575 [2024-07-13 06:09:14.077122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.575 [2024-07-13 06:09:14.077133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:57752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.575 [2024-07-13 06:09:14.077143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.575 [2024-07-13 06:09:14.077154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:57760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.575 [2024-07-13 06:09:14.077163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.575 [2024-07-13 06:09:14.077175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:57768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.575 [2024-07-13 06:09:14.077184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.575 [2024-07-13 06:09:14.077195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:57776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.575 [2024-07-13 06:09:14.077204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.575 [2024-07-13 06:09:14.077215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:57784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.575 [2024-07-13 06:09:14.077224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.575 [2024-07-13 06:09:14.077235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:57792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.575 [2024-07-13 06:09:14.077244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.575 [2024-07-13 06:09:14.077255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:57800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.575 [2024-07-13 06:09:14.077264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.575 [2024-07-13 06:09:14.077276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:57808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.575 [2024-07-13 06:09:14.077285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.575 [2024-07-13 06:09:14.077297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:57816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.575 [2024-07-13 06:09:14.077306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.575 [2024-07-13 06:09:14.077318] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:57824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.575 [2024-07-13 06:09:14.077327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.575 [2024-07-13 06:09:14.077337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:57832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.575 [2024-07-13 06:09:14.077347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.575 [2024-07-13 06:09:14.077358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:57840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.575 [2024-07-13 06:09:14.077367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.575 [2024-07-13 06:09:14.077378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:57848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.575 [2024-07-13 06:09:14.077387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.575 [2024-07-13 06:09:14.077398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:57856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.575 [2024-07-13 06:09:14.077407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.575 [2024-07-13 06:09:14.077432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:57864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.575 [2024-07-13 06:09:14.077444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.575 [2024-07-13 06:09:14.077455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:57872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.575 [2024-07-13 06:09:14.077464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.575 [2024-07-13 06:09:14.077476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:57880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.575 [2024-07-13 06:09:14.077485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.575 [2024-07-13 06:09:14.077497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:57888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.575 [2024-07-13 06:09:14.077506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.575 [2024-07-13 06:09:14.077517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:57896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.575 [2024-07-13 06:09:14.077526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.575 [2024-07-13 06:09:14.077537] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:57904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.575 [2024-07-13 06:09:14.077547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.575 [2024-07-13 06:09:14.077558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:57912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.575 [2024-07-13 06:09:14.077566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.575 [2024-07-13 06:09:14.077578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:57920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.575 [2024-07-13 06:09:14.077587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.576 [2024-07-13 06:09:14.077598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:57928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.576 [2024-07-13 06:09:14.077607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.576 [2024-07-13 06:09:14.077620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:57936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.576 [2024-07-13 06:09:14.077630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.576 [2024-07-13 06:09:14.077641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:57944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.576 [2024-07-13 06:09:14.077650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.576 [2024-07-13 06:09:14.077661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:57952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.576 [2024-07-13 06:09:14.077671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.576 [2024-07-13 06:09:14.077682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:57960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.576 [2024-07-13 06:09:14.077691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.576 [2024-07-13 06:09:14.077702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:57968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.576 [2024-07-13 06:09:14.077711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.576 [2024-07-13 06:09:14.077723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:57976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.576 [2024-07-13 06:09:14.077732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.576 [2024-07-13 06:09:14.077743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:57984 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.576 [2024-07-13 06:09:14.077752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.576 [2024-07-13 06:09:14.077763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:57992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.576 [2024-07-13 06:09:14.077772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.576 [2024-07-13 06:09:14.077784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:58000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.576 [2024-07-13 06:09:14.077793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.576 [2024-07-13 06:09:14.077804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:58008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.576 [2024-07-13 06:09:14.077813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.576 [2024-07-13 06:09:14.077825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:58016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.576 [2024-07-13 06:09:14.077834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.576 [2024-07-13 06:09:14.077846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:58024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.576 [2024-07-13 06:09:14.077855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.576 [2024-07-13 06:09:14.077866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:58032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.576 [2024-07-13 06:09:14.077875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.576 [2024-07-13 06:09:14.077886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:58040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.576 [2024-07-13 06:09:14.077896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.576 [2024-07-13 06:09:14.077907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:58048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.576 [2024-07-13 06:09:14.077916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.576 [2024-07-13 06:09:14.077928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:58056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.576 [2024-07-13 06:09:14.077937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.576 [2024-07-13 06:09:14.077948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:58064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.576 
[2024-07-13 06:09:14.077958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.576 [2024-07-13 06:09:14.077969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:58072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.576 [2024-07-13 06:09:14.077978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.576 [2024-07-13 06:09:14.077989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:58080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.576 [2024-07-13 06:09:14.077998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.576 [2024-07-13 06:09:14.078010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:58088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.576 [2024-07-13 06:09:14.078019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.576 [2024-07-13 06:09:14.078030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:58096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.576 [2024-07-13 06:09:14.078039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.576 [2024-07-13 06:09:14.078051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:58104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.576 [2024-07-13 06:09:14.078060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.576 [2024-07-13 06:09:14.078071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:58112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.576 [2024-07-13 06:09:14.078080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.576 [2024-07-13 06:09:14.078091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:58120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.576 [2024-07-13 06:09:14.078101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.576 [2024-07-13 06:09:14.078112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:58128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.576 [2024-07-13 06:09:14.078121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.576 [2024-07-13 06:09:14.078132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:58136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.576 [2024-07-13 06:09:14.078141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.576 [2024-07-13 06:09:14.078154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:58144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.576 [2024-07-13 06:09:14.078164] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.576 [2024-07-13 06:09:14.078175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:58152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.576 [2024-07-13 06:09:14.078184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.576 [2024-07-13 06:09:14.078195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:58160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.576 [2024-07-13 06:09:14.078204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.576 [2024-07-13 06:09:14.078215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:58168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.576 [2024-07-13 06:09:14.078225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.576 [2024-07-13 06:09:14.078236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:58176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.576 [2024-07-13 06:09:14.078245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.576 [2024-07-13 06:09:14.078256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:58184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.576 [2024-07-13 06:09:14.078277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.576 [2024-07-13 06:09:14.078290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:58192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.576 [2024-07-13 06:09:14.078300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.576 [2024-07-13 06:09:14.078311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:58200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.576 [2024-07-13 06:09:14.078320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.576 [2024-07-13 06:09:14.078331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:58208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.576 [2024-07-13 06:09:14.078341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.576 [2024-07-13 06:09:14.078352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:58216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.576 [2024-07-13 06:09:14.078361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.576 [2024-07-13 06:09:14.078383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:58224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.576 [2024-07-13 06:09:14.078393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.576 [2024-07-13 06:09:14.078404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:58232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.576 [2024-07-13 06:09:14.078413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.576 [2024-07-13 06:09:14.078425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:58240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.576 [2024-07-13 06:09:14.078434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.576 [2024-07-13 06:09:14.078445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:58248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.576 [2024-07-13 06:09:14.078455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.576 [2024-07-13 06:09:14.078466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:58256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.576 [2024-07-13 06:09:14.078475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.577 [2024-07-13 06:09:14.078486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:58264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.577 [2024-07-13 06:09:14.078495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.577 [2024-07-13 06:09:14.078507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:58272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.577 [2024-07-13 06:09:14.078517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.577 [2024-07-13 06:09:14.078528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:58280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.577 [2024-07-13 06:09:14.078537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.577 [2024-07-13 06:09:14.078548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:58288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.577 [2024-07-13 06:09:14.078557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.577 [2024-07-13 06:09:14.078569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:58296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.577 [2024-07-13 06:09:14.078578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.577 [2024-07-13 06:09:14.078589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:58304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.577 [2024-07-13 06:09:14.078598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:22:22.577 [2024-07-13 06:09:14.078609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:58312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.577 [2024-07-13 06:09:14.078619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.577 [2024-07-13 06:09:14.078630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:58320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.577 [2024-07-13 06:09:14.078642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.577 [2024-07-13 06:09:14.078653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:58328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.577 [2024-07-13 06:09:14.078662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.577 [2024-07-13 06:09:14.078673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:58336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.577 [2024-07-13 06:09:14.078683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.577 [2024-07-13 06:09:14.078694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:58344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.577 [2024-07-13 06:09:14.078703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.577 [2024-07-13 06:09:14.078714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:58352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.577 [2024-07-13 06:09:14.078723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.577 [2024-07-13 06:09:14.078735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:58360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.577 [2024-07-13 06:09:14.078744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.577 [2024-07-13 06:09:14.078755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:58368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.577 [2024-07-13 06:09:14.078764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.577 [2024-07-13 06:09:14.078775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:58376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.577 [2024-07-13 06:09:14.078785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.577 [2024-07-13 06:09:14.078796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:58384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.577 [2024-07-13 06:09:14.078805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.577 [2024-07-13 06:09:14.078817] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:58392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.577 [2024-07-13 06:09:14.078826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.577 [2024-07-13 06:09:14.078838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:58400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.577 [2024-07-13 06:09:14.078847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.577 [2024-07-13 06:09:14.078858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:58408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.577 [2024-07-13 06:09:14.078868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.577 [2024-07-13 06:09:14.078879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:58416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.577 [2024-07-13 06:09:14.078888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.577 [2024-07-13 06:09:14.078899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:58424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.577 [2024-07-13 06:09:14.078908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.577 [2024-07-13 06:09:14.078919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:58432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.577 [2024-07-13 06:09:14.078929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.577 [2024-07-13 06:09:14.078941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:58440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.577 [2024-07-13 06:09:14.078950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.577 [2024-07-13 06:09:14.078962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:58448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.577 [2024-07-13 06:09:14.078976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.577 [2024-07-13 06:09:14.078987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:58456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.577 [2024-07-13 06:09:14.078996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.577 [2024-07-13 06:09:14.079008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:58464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.577 [2024-07-13 06:09:14.079017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.577 [2024-07-13 06:09:14.079028] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:58472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.577 [2024-07-13 06:09:14.079037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.577 [2024-07-13 06:09:14.079048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:58480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.577 [2024-07-13 06:09:14.079058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.577 [2024-07-13 06:09:14.079069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:58488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.577 [2024-07-13 06:09:14.079078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.577 [2024-07-13 06:09:14.079089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:58496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.577 [2024-07-13 06:09:14.079099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.577 [2024-07-13 06:09:14.079110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:58504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.577 [2024-07-13 06:09:14.079119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.577 [2024-07-13 06:09:14.079130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:57512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.577 [2024-07-13 06:09:14.079139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.577 [2024-07-13 06:09:14.079151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:57520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.577 [2024-07-13 06:09:14.079160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.577 [2024-07-13 06:09:14.079171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:57528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.577 [2024-07-13 06:09:14.079181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.577 [2024-07-13 06:09:14.079192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:57536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.577 [2024-07-13 06:09:14.079201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.577 [2024-07-13 06:09:14.079212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:57544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.577 [2024-07-13 06:09:14.079222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.577 [2024-07-13 06:09:14.079233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 
lba:57552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.577 [2024-07-13 06:09:14.079242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.577 [2024-07-13 06:09:14.079254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:57560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.577 [2024-07-13 06:09:14.079263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.577 [2024-07-13 06:09:14.079280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:57568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.577 [2024-07-13 06:09:14.079290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.577 [2024-07-13 06:09:14.079301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:57576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.577 [2024-07-13 06:09:14.079312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.577 [2024-07-13 06:09:14.079323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:57584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.577 [2024-07-13 06:09:14.079333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.577 [2024-07-13 06:09:14.079344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.577 [2024-07-13 06:09:14.079353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.577 [2024-07-13 06:09:14.079364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:57600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.578 [2024-07-13 06:09:14.079385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.578 [2024-07-13 06:09:14.079397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:57608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.578 [2024-07-13 06:09:14.079406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.578 [2024-07-13 06:09:14.079417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:57616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.578 [2024-07-13 06:09:14.079427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.578 [2024-07-13 06:09:14.079438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:57624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.578 [2024-07-13 06:09:14.079448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.578 [2024-07-13 06:09:14.079459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:58512 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:22:22.578 [2024-07-13 06:09:14.079469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.578 [2024-07-13 06:09:14.079479] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30270 is same with the state(5) to be set 00:22:22.578 [2024-07-13 06:09:14.079493] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:22.578 [2024-07-13 06:09:14.079501] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:22.578 [2024-07-13 06:09:14.079509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58520 len:8 PRP1 0x0 PRP2 0x0 00:22:22.578 [2024-07-13 06:09:14.079518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.578 [2024-07-13 06:09:14.079562] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf30270 was disconnected and freed. reset controller. 00:22:22.578 [2024-07-13 06:09:14.079642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:22.578 [2024-07-13 06:09:14.079659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.578 [2024-07-13 06:09:14.079670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:22.578 [2024-07-13 06:09:14.079680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.578 [2024-07-13 06:09:14.079690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:22.578 [2024-07-13 06:09:14.079699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.578 [2024-07-13 06:09:14.079709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:22.578 [2024-07-13 06:09:14.079718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.578 [2024-07-13 06:09:14.079730] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32760 is same with the state(5) to be set 00:22:22.578 [2024-07-13 06:09:14.079957] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:22.578 [2024-07-13 06:09:14.079990] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf32760 (9): Bad file descriptor 00:22:22.578 [2024-07-13 06:09:14.080085] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.578 [2024-07-13 06:09:14.080118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf32760 with addr=10.0.0.2, port=4420 00:22:22.578 [2024-07-13 06:09:14.080130] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32760 is same with the state(5) to be set 00:22:22.578 [2024-07-13 06:09:14.080149] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf32760 (9): Bad 
file descriptor 00:22:22.578 [2024-07-13 06:09:14.080165] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:22.578 [2024-07-13 06:09:14.080175] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:22.578 [2024-07-13 06:09:14.080186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:22.578 [2024-07-13 06:09:14.080206] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:22.578 [2024-07-13 06:09:14.080218] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:22.578 06:09:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:22:23.515 [2024-07-13 06:09:15.080339] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.515 [2024-07-13 06:09:15.080405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf32760 with addr=10.0.0.2, port=4420 00:22:23.515 [2024-07-13 06:09:15.080423] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32760 is same with the state(5) to be set 00:22:23.515 [2024-07-13 06:09:15.080448] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf32760 (9): Bad file descriptor 00:22:23.515 [2024-07-13 06:09:15.080466] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:23.515 [2024-07-13 06:09:15.080476] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:23.515 [2024-07-13 06:09:15.080486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:23.515 [2024-07-13 06:09:15.080515] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:23.515 [2024-07-13 06:09:15.080529] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:24.452 [2024-07-13 06:09:16.080679] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.452 [2024-07-13 06:09:16.080735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf32760 with addr=10.0.0.2, port=4420 00:22:24.452 [2024-07-13 06:09:16.080751] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32760 is same with the state(5) to be set 00:22:24.452 [2024-07-13 06:09:16.080777] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf32760 (9): Bad file descriptor 00:22:24.452 [2024-07-13 06:09:16.080796] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:24.452 [2024-07-13 06:09:16.080806] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:24.452 [2024-07-13 06:09:16.080818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:24.452 [2024-07-13 06:09:16.080845] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:24.452 [2024-07-13 06:09:16.080864] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:25.389 [2024-07-13 06:09:17.084830] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.389 [2024-07-13 06:09:17.084918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf32760 with addr=10.0.0.2, port=4420 00:22:25.389 [2024-07-13 06:09:17.084950] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32760 is same with the state(5) to be set 00:22:25.389 [2024-07-13 06:09:17.085219] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf32760 (9): Bad file descriptor 00:22:25.389 [2024-07-13 06:09:17.085486] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:25.389 [2024-07-13 06:09:17.085509] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:25.389 [2024-07-13 06:09:17.085522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:25.389 [2024-07-13 06:09:17.089803] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:25.389 [2024-07-13 06:09:17.089850] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:25.389 06:09:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:25.648 [2024-07-13 06:09:17.353574] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:25.648 06:09:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@103 -- # wait 96241 00:22:26.585 [2024-07-13 06:09:18.130903] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
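The retry pattern captured above — connect() failing with errno = 111 roughly once per second until the nvmf_subsystem_add_listener call at 06:09:17, after which the pending reset completes with "Resetting controller successful." — is produced by toggling the target's TCP listener from host/timeout.sh. A minimal bash sketch of that cycle follows; it assumes only the rpc.py invocations that appear verbatim in this trace and does not reproduce the rest of the script:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  # Drop the TCP listener: bdev_nvme keeps retrying the controller reset and every
  # reconnect attempt fails with "connect() failed, errno = 111".
  "$rpc" nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420

  # host/timeout.sh@101 -- # sleep 3 in the trace above
  sleep 3

  # Restore the listener: the next reconnect attempt succeeds and the reset finishes.
  "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420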
00:22:31.873
00:22:31.873 Latency(us)
00:22:31.873 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:31.873 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:22:31.873 Verification LBA range: start 0x0 length 0x4000
00:22:31.873 NVMe0n1 : 10.01 4848.19 18.94 3316.69 0.00 15644.13 696.32 3019898.88
00:22:31.873 ===================================================================================================================
00:22:31.873 Total : 4848.19 18.94 3316.69 0.00 15644.13 0.00 3019898.88
00:22:31.873 0
00:22:31.873 06:09:22 nvmf_tcp.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 96118 00:22:31.873 06:09:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 96118 ']' 00:22:31.873 06:09:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 96118 00:22:31.873 06:09:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:22:31.873 06:09:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:31.873 06:09:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96118 00:22:31.873 killing process with pid 96118 Received shutdown signal, test time was about 10.000000 seconds
00:22:31.873
00:22:31.873 Latency(us)
00:22:31.873 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:31.873 ===================================================================================================================
00:22:31.873 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:31.873 06:09:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:31.873 06:09:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:31.873 06:09:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96118' 00:22:31.873 06:09:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 96118 00:22:31.873 06:09:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 96118 00:22:31.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:31.873 06:09:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=96351 00:22:31.873 06:09:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 96351 /var/tmp/bdevperf.sock 00:22:31.873 06:09:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:22:31.873 06:09:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 96351 ']' 00:22:31.873 06:09:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:31.873 06:09:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:31.873 06:09:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:31.873 06:09:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:31.873 06:09:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:31.873 [2024-07-13 06:09:23.208505] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:22:31.873 [2024-07-13 06:09:23.209169] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96351 ] 00:22:31.873 [2024-07-13 06:09:23.344748] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:31.873 [2024-07-13 06:09:23.384326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:31.873 [2024-07-13 06:09:23.417373] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:31.873 06:09:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:31.873 06:09:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:22:31.873 06:09:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=96358 00:22:31.873 06:09:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:22:31.873 06:09:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96351 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:22:32.132 06:09:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:22:32.390 NVMe0n1 00:22:32.390 06:09:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=96401 00:22:32.390 06:09:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:32.390 06:09:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:22:32.648 Running I/O for 10 seconds... 
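The setup traced above restarts bdevperf in RPC-driven mode and attaches the controller with a 5-second ctrlr-loss timeout and a 2-second reconnect delay before the 10-second random-read run begins. Condensed into a bash sketch with the same arguments as the trace (the /home/vagrant/spdk_repo paths are this CI workspace's layout, and the bpftrace.sh attachment step is left out):

  spdk=/home/vagrant/spdk_repo/spdk
  sock=/var/tmp/bdevperf.sock

  # Start bdevperf on core 2 (-m 0x4) in wait-for-RPC mode (-z) with a 128-deep,
  # 4096-byte randread job; the trace's waitforlisten step waits for this socket.
  "$spdk"/build/examples/bdevperf -m 0x4 -z -r "$sock" -q 128 -o 4096 -w randread -t 10 -f &

  # Same bdev_nvme options and controller attach as host/timeout.sh@118 and @120 above.
  "$spdk"/scripts/rpc.py -s "$sock" bdev_nvme_set_options -r -1 -e 9
  "$spdk"/scripts/rpc.py -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

  # Kick off the queued job; this is what produces "Running I/O for 10 seconds..." above.
  "$spdk"/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests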
00:22:33.583 06:09:25 nvmf_tcp.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:33.845 [2024-07-13 06:09:25.370180] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f44400 is same with the state(5) to be set
[the identical tcp.c:1607 *ERROR* message for tqpair=0x1f44400 repeats with timestamps 06:09:25.370235 through 06:09:25.371503; the verbatim duplicate lines are omitted here]
00:22:33.846 [2024-07-13 06:09:25.371562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.846 [2024-07-13 06:09:25.371609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.846 [2024-07-13 06:09:25.371631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:62072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.846 [2024-07-13 06:09:25.371642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.846 [2024-07-13 
06:09:25.371654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.847 [2024-07-13 06:09:25.371664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.847 [2024-07-13 06:09:25.371675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.847 [2024-07-13 06:09:25.371685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.847 [2024-07-13 06:09:25.371696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.847 [2024-07-13 06:09:25.371706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.847 [2024-07-13 06:09:25.371717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:34240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.847 [2024-07-13 06:09:25.371726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.847 [2024-07-13 06:09:25.371738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:69904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.847 [2024-07-13 06:09:25.371747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.847 [2024-07-13 06:09:25.371758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:126264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.847 [2024-07-13 06:09:25.371767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.847 [2024-07-13 06:09:25.371779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:108904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.847 [2024-07-13 06:09:25.371789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.847 [2024-07-13 06:09:25.371800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.847 [2024-07-13 06:09:25.371810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.847 [2024-07-13 06:09:25.371821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:57392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.847 [2024-07-13 06:09:25.371830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.847 [2024-07-13 06:09:25.371842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.847 [2024-07-13 06:09:25.371851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.847 [2024-07-13 06:09:25.371863] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:46624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.847 [2024-07-13 06:09:25.371872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.847 [2024-07-13 06:09:25.371883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:108256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.847 [2024-07-13 06:09:25.371893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.847 [2024-07-13 06:09:25.371905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:43240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.847 [2024-07-13 06:09:25.371914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.847 [2024-07-13 06:09:25.371926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:108664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.847 [2024-07-13 06:09:25.371935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.847 [2024-07-13 06:09:25.371946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:129904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.847 [2024-07-13 06:09:25.371957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.847 [2024-07-13 06:09:25.371974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:71680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.847 [2024-07-13 06:09:25.371983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.847 [2024-07-13 06:09:25.371995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:128640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.847 [2024-07-13 06:09:25.372004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.847 [2024-07-13 06:09:25.372030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:58336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.847 [2024-07-13 06:09:25.372039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.847 [2024-07-13 06:09:25.372050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:106824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.847 [2024-07-13 06:09:25.372059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.847 [2024-07-13 06:09:25.372070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:1576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.847 [2024-07-13 06:09:25.372079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.847 [2024-07-13 06:09:25.372090] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:24 nsid:1 lba:119224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.847 [2024-07-13 06:09:25.372099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.847 [2024-07-13 06:09:25.372110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:61664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.847 [2024-07-13 06:09:25.372119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.847 [2024-07-13 06:09:25.372130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:128304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.847 [2024-07-13 06:09:25.372155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.847 [2024-07-13 06:09:25.372167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:21176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.847 [2024-07-13 06:09:25.372176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.847 [2024-07-13 06:09:25.372189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:119000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.847 [2024-07-13 06:09:25.372198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.847 [2024-07-13 06:09:25.372209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:37776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.847 [2024-07-13 06:09:25.372218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.847 [2024-07-13 06:09:25.372230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:64056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.847 [2024-07-13 06:09:25.372239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.847 [2024-07-13 06:09:25.372250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:125104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.847 [2024-07-13 06:09:25.372260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.847 [2024-07-13 06:09:25.372271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:75688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.847 [2024-07-13 06:09:25.372280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.847 [2024-07-13 06:09:25.372292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:49560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.847 [2024-07-13 06:09:25.372301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.847 [2024-07-13 06:09:25.372313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 
lba:104672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.847 [2024-07-13 06:09:25.372323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.847 [2024-07-13 06:09:25.372336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:52312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.847 [2024-07-13 06:09:25.372346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.847 [2024-07-13 06:09:25.372357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:96096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.847 [2024-07-13 06:09:25.372366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.847 [2024-07-13 06:09:25.372377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:127512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.847 [2024-07-13 06:09:25.372386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.847 [2024-07-13 06:09:25.372397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:102176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.847 [2024-07-13 06:09:25.372406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.847 [2024-07-13 06:09:25.372418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:19352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.847 [2024-07-13 06:09:25.372443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.847 [2024-07-13 06:09:25.372458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:97992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.847 [2024-07-13 06:09:25.372467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.847 [2024-07-13 06:09:25.372479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:77248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.847 [2024-07-13 06:09:25.372488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.847 [2024-07-13 06:09:25.372499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:90248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.847 [2024-07-13 06:09:25.372508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.847 [2024-07-13 06:09:25.372520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:28128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.847 [2024-07-13 06:09:25.372529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.847 [2024-07-13 06:09:25.372541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:51760 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:33.847 [2024-07-13 06:09:25.372550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.847 [2024-07-13 06:09:25.372576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:89760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.847 [2024-07-13 06:09:25.372585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.848 [2024-07-13 06:09:25.372596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:108048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.848 [2024-07-13 06:09:25.372605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.848 [2024-07-13 06:09:25.372632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:74768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.848 [2024-07-13 06:09:25.372642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.848 [2024-07-13 06:09:25.372668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:49992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.848 [2024-07-13 06:09:25.372677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.848 [2024-07-13 06:09:25.372688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:65648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.848 [2024-07-13 06:09:25.372697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.848 [2024-07-13 06:09:25.372708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:24144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.848 [2024-07-13 06:09:25.372735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.848 [2024-07-13 06:09:25.372754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:6280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.848 [2024-07-13 06:09:25.372769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.848 [2024-07-13 06:09:25.372781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:49512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.848 [2024-07-13 06:09:25.372790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.848 [2024-07-13 06:09:25.372802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:77896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.848 [2024-07-13 06:09:25.372811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.848 [2024-07-13 06:09:25.372822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:122176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.848 [2024-07-13 
06:09:25.372831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.848 [2024-07-13 06:09:25.372843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:81704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.848 [2024-07-13 06:09:25.372854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.848 [2024-07-13 06:09:25.372866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:129344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.848 [2024-07-13 06:09:25.372875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.848 [2024-07-13 06:09:25.372886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:48336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.848 [2024-07-13 06:09:25.372896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.848 [2024-07-13 06:09:25.372907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:31832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.848 [2024-07-13 06:09:25.372916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.848 [2024-07-13 06:09:25.372927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:17760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.848 [2024-07-13 06:09:25.372937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.848 [2024-07-13 06:09:25.372948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:109864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.848 [2024-07-13 06:09:25.372957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.848 [2024-07-13 06:09:25.372968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.848 [2024-07-13 06:09:25.372978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.848 [2024-07-13 06:09:25.372989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:43696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.848 [2024-07-13 06:09:25.372998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.848 [2024-07-13 06:09:25.373010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:57864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.848 [2024-07-13 06:09:25.373019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.848 [2024-07-13 06:09:25.373030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:89304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.848 [2024-07-13 06:09:25.373040] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.848 [2024-07-13 06:09:25.373065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:40440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.848 [2024-07-13 06:09:25.373074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.848 [2024-07-13 06:09:25.373102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:40144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.848 [2024-07-13 06:09:25.373111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.848 [2024-07-13 06:09:25.373137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:91536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.848 [2024-07-13 06:09:25.373147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.848 [2024-07-13 06:09:25.373158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:21912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.848 [2024-07-13 06:09:25.373167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.848 [2024-07-13 06:09:25.373178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:55144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.848 [2024-07-13 06:09:25.373202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.848 [2024-07-13 06:09:25.373213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:48288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.848 [2024-07-13 06:09:25.373223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.848 [2024-07-13 06:09:25.373234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:68056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.848 [2024-07-13 06:09:25.373246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.848 [2024-07-13 06:09:25.373257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:107040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.848 [2024-07-13 06:09:25.373267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.848 [2024-07-13 06:09:25.373278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:96024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.848 [2024-07-13 06:09:25.373287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.848 [2024-07-13 06:09:25.373298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:72120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.848 [2024-07-13 06:09:25.373307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.848 [2024-07-13 06:09:25.373318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:123440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.848 [2024-07-13 06:09:25.373328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.848 [2024-07-13 06:09:25.373339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:56016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.848 [2024-07-13 06:09:25.373348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.848 [2024-07-13 06:09:25.373360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:95016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.848 [2024-07-13 06:09:25.373369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.848 [2024-07-13 06:09:25.373380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:14616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.848 [2024-07-13 06:09:25.373389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.848 [2024-07-13 06:09:25.373401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:101272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.848 [2024-07-13 06:09:25.373410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.848 [2024-07-13 06:09:25.373422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:62792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.848 [2024-07-13 06:09:25.373432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.848 [2024-07-13 06:09:25.373443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:95040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.848 [2024-07-13 06:09:25.373453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.848 [2024-07-13 06:09:25.373464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:25936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.848 [2024-07-13 06:09:25.373473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.848 [2024-07-13 06:09:25.373496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:15472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.848 [2024-07-13 06:09:25.373507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.848 [2024-07-13 06:09:25.373519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:103656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.848 [2024-07-13 06:09:25.373529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.848 [2024-07-13 06:09:25.373540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:32904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.848 [2024-07-13 06:09:25.373549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.848 [2024-07-13 06:09:25.373561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:130632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.848 [2024-07-13 06:09:25.373570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.848 [2024-07-13 06:09:25.373582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:87184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.848 [2024-07-13 06:09:25.373593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.848 [2024-07-13 06:09:25.373604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:129640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.849 [2024-07-13 06:09:25.373614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.849 [2024-07-13 06:09:25.373625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:87448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.849 [2024-07-13 06:09:25.373635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.849 [2024-07-13 06:09:25.373646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:76192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.849 [2024-07-13 06:09:25.373655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.849 [2024-07-13 06:09:25.373666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:60768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.849 [2024-07-13 06:09:25.373675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.849 [2024-07-13 06:09:25.373687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:103440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.849 [2024-07-13 06:09:25.373696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.849 [2024-07-13 06:09:25.373708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:44704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.849 [2024-07-13 06:09:25.373717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.849 [2024-07-13 06:09:25.373728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:25160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.849 [2024-07-13 06:09:25.373737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:33.849 [2024-07-13 06:09:25.373749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:59840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.849 [2024-07-13 06:09:25.373758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.849 [2024-07-13 06:09:25.373769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:85408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.849 [2024-07-13 06:09:25.373779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.849 [2024-07-13 06:09:25.373790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:104760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.849 [2024-07-13 06:09:25.373799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.849 [2024-07-13 06:09:25.373811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:116944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.849 [2024-07-13 06:09:25.373820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.849 [2024-07-13 06:09:25.373831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:50712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.849 [2024-07-13 06:09:25.373841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.849 [2024-07-13 06:09:25.373853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:57440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.849 [2024-07-13 06:09:25.373862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.849 [2024-07-13 06:09:25.373874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:33936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.849 [2024-07-13 06:09:25.373883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.849 [2024-07-13 06:09:25.373894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:111416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.849 [2024-07-13 06:09:25.373904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.849 [2024-07-13 06:09:25.373915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:34904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.849 [2024-07-13 06:09:25.373926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.849 [2024-07-13 06:09:25.373937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:99096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.849 [2024-07-13 06:09:25.373947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.849 [2024-07-13 
06:09:25.373958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:29000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.849 [2024-07-13 06:09:25.373967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.849 [2024-07-13 06:09:25.373978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:40200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.849 [2024-07-13 06:09:25.373987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.849 [2024-07-13 06:09:25.373999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:126944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.849 [2024-07-13 06:09:25.374008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.849 [2024-07-13 06:09:25.374019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:23032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.849 [2024-07-13 06:09:25.374029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.849 [2024-07-13 06:09:25.374040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:24968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.849 [2024-07-13 06:09:25.374050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.849 [2024-07-13 06:09:25.374061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:55096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.849 [2024-07-13 06:09:25.374070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.849 [2024-07-13 06:09:25.374082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:50552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.849 [2024-07-13 06:09:25.374091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.849 [2024-07-13 06:09:25.374102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:59104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.849 [2024-07-13 06:09:25.374112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.849 [2024-07-13 06:09:25.374123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:51432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.849 [2024-07-13 06:09:25.374132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.849 [2024-07-13 06:09:25.374143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:22624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.849 [2024-07-13 06:09:25.374153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.849 [2024-07-13 06:09:25.374167] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:60328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.849 [2024-07-13 06:09:25.374177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.849 [2024-07-13 06:09:25.374188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:14592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.849 [2024-07-13 06:09:25.374200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.849 [2024-07-13 06:09:25.374218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:40088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.849 [2024-07-13 06:09:25.374229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.849 [2024-07-13 06:09:25.374241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:3184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.849 [2024-07-13 06:09:25.374250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.849 [2024-07-13 06:09:25.374261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:20032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.849 [2024-07-13 06:09:25.374286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.849 [2024-07-13 06:09:25.374299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:130344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.849 [2024-07-13 06:09:25.374309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.849 [2024-07-13 06:09:25.374320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:58168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.849 [2024-07-13 06:09:25.374330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.849 [2024-07-13 06:09:25.374341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:99568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.849 [2024-07-13 06:09:25.374350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.849 [2024-07-13 06:09:25.374361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:45744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.850 [2024-07-13 06:09:25.374381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.850 [2024-07-13 06:09:25.374394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:77200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.850 [2024-07-13 06:09:25.374403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.850 [2024-07-13 06:09:25.374414] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:125 nsid:1 lba:9448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.850 [2024-07-13 06:09:25.374424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.850 [2024-07-13 06:09:25.374435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:125176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.850 [2024-07-13 06:09:25.374444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.850 [2024-07-13 06:09:25.374456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:100304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.850 [2024-07-13 06:09:25.374465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.850 [2024-07-13 06:09:25.374476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.850 [2024-07-13 06:09:25.374486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.850 [2024-07-13 06:09:25.374496] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc5f40 is same with the state(5) to be set 00:22:33.850 [2024-07-13 06:09:25.374509] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:33.850 [2024-07-13 06:09:25.374516] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.850 [2024-07-13 06:09:25.374526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80728 len:8 PRP1 0x0 PRP2 0x0 00:22:33.850 [2024-07-13 06:09:25.374538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.850 [2024-07-13 06:09:25.374582] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1fc5f40 was disconnected and freed. reset controller. 
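The flood of paired *NOTICE* entries above is the qpair teardown path of the timeout test: each queued READ is printed and then completed manually with ABORTED - SQ DELETION before the disconnected qpair (0x1fc5f40) is freed and the controller reset begins. A minimal, illustrative bash sketch for summarizing such a capture offline follows; it is not part of the test suite, and the log file name is a placeholder assumption:

```bash
#!/usr/bin/env bash
# Illustrative helper, not part of the SPDK test suite: summarize the aborted I/O
# shown above from a saved console log. "console.log" is a placeholder path.
log=${1:-console.log}

# Count occurrences rather than lines (several entries can share one physical
# line in an archived log like this one).
aborted=$(grep -o 'ABORTED - SQ DELETION' "$log" | wc -l)
printed=$(grep -o 'nvme_io_qpair_print_command: \*NOTICE\*: READ' "$log" | wc -l)
echo "aborted completions: $aborted, printed READ commands: $printed"

# Distribution of aborted I/O sizes (every entry above is len:8, i.e. 8 blocks).
grep -o 'len:[0-9]*' "$log" | sort | uniq -c | sort -rn
```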
00:22:33.850 [2024-07-13 06:09:25.374855] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:33.850 [2024-07-13 06:09:25.374954] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f90920 (9): Bad file descriptor 00:22:33.850 [2024-07-13 06:09:25.375072] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:33.850 [2024-07-13 06:09:25.375116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f90920 with addr=10.0.0.2, port=4420 00:22:33.850 [2024-07-13 06:09:25.375128] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f90920 is same with the state(5) to be set 00:22:33.850 [2024-07-13 06:09:25.375147] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f90920 (9): Bad file descriptor 00:22:33.850 [2024-07-13 06:09:25.375165] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:33.850 [2024-07-13 06:09:25.375178] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:33.850 [2024-07-13 06:09:25.375188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:33.850 [2024-07-13 06:09:25.375210] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:33.850 [2024-07-13 06:09:25.375221] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:33.850 06:09:25 nvmf_tcp.nvmf_timeout -- host/timeout.sh@128 -- # wait 96401 00:22:35.752 [2024-07-13 06:09:27.375522] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:35.752 [2024-07-13 06:09:27.375586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f90920 with addr=10.0.0.2, port=4420 00:22:35.752 [2024-07-13 06:09:27.375602] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f90920 is same with the state(5) to be set 00:22:35.752 [2024-07-13 06:09:27.375628] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f90920 (9): Bad file descriptor 00:22:35.752 [2024-07-13 06:09:27.375648] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:35.752 [2024-07-13 06:09:27.375659] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:35.752 [2024-07-13 06:09:27.375670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:35.752 [2024-07-13 06:09:27.375697] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
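The reconnect attempts in this stretch of the log fail with connect() errno 111 (ECONNREFUSED) roughly every 2 seconds; timeout.sh then verifies the back-off by counting 'reconnect delay bdev controller NVMe0' events in the bdevperf trace it dumps (see the trace excerpt and grep further down). A minimal sketch of that kind of check is shown below; treat it as illustrative rather than the script's exact code, with the trace path taken from the test output above:

```bash
#!/usr/bin/env bash
# Illustrative re-creation of the reconnect-delay check visible further down in
# this log; the trace path is copied from the test output and may differ elsewhere.
trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt

# Count occurrences, not lines, in case the archived trace wraps several entries
# onto one physical line.
delays=$(grep -o 'reconnect delay bdev controller NVMe0' "$trace" | wc -l)

# The run above records 3 delay events (~2 s apart); 2 or fewer would mean the
# bdev layer reconnected without backing off, which the test treats as a failure.
if (( delays <= 2 )); then
    echo "only $delays reconnect delay event(s) recorded" >&2
    exit 1
fi
echo "observed $delays reconnect delay events"
```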
00:22:35.752 [2024-07-13 06:09:27.375709] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:37.657 [2024-07-13 06:09:29.375993] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:37.657 [2024-07-13 06:09:29.376065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f90920 with addr=10.0.0.2, port=4420 00:22:37.657 [2024-07-13 06:09:29.376081] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f90920 is same with the state(5) to be set 00:22:37.657 [2024-07-13 06:09:29.376108] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f90920 (9): Bad file descriptor 00:22:37.657 [2024-07-13 06:09:29.376128] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:37.657 [2024-07-13 06:09:29.376139] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:37.657 [2024-07-13 06:09:29.376150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:37.657 [2024-07-13 06:09:29.376179] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:37.657 [2024-07-13 06:09:29.376190] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:40.190 [2024-07-13 06:09:31.376422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:40.190 [2024-07-13 06:09:31.376501] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:40.190 [2024-07-13 06:09:31.376514] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:40.190 [2024-07-13 06:09:31.376524] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:22:40.190 [2024-07-13 06:09:31.376551] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:40.757 00:22:40.757 Latency(us) 00:22:40.757 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:40.757 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:22:40.757 NVMe0n1 : 8.20 1982.78 7.75 15.61 0.00 63936.07 8460.10 7015926.69 00:22:40.757 =================================================================================================================== 00:22:40.757 Total : 1982.78 7.75 15.61 0.00 63936.07 8460.10 7015926.69 00:22:40.757 0 00:22:40.757 06:09:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:40.757 Attaching 5 probes... 
00:22:40.757 1384.275468: reset bdev controller NVMe0 00:22:40.757 1384.435139: reconnect bdev controller NVMe0 00:22:40.757 3384.774596: reconnect delay bdev controller NVMe0 00:22:40.757 3384.818699: reconnect bdev controller NVMe0 00:22:40.757 5385.276491: reconnect delay bdev controller NVMe0 00:22:40.757 5385.311592: reconnect bdev controller NVMe0 00:22:40.757 7385.792263: reconnect delay bdev controller NVMe0 00:22:40.757 7385.834678: reconnect bdev controller NVMe0 00:22:40.757 06:09:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:22:40.757 06:09:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:22:40.757 06:09:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@136 -- # kill 96358 00:22:40.757 06:09:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:40.757 06:09:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 96351 00:22:40.757 06:09:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 96351 ']' 00:22:40.757 06:09:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 96351 00:22:40.757 06:09:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:22:40.757 06:09:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:40.757 06:09:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96351 00:22:40.757 killing process with pid 96351 00:22:40.757 Received shutdown signal, test time was about 8.259897 seconds 00:22:40.757 00:22:40.757 Latency(us) 00:22:40.757 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:40.757 =================================================================================================================== 00:22:40.757 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:40.757 06:09:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:40.757 06:09:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:40.757 06:09:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96351' 00:22:40.757 06:09:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 96351 00:22:40.757 06:09:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 96351 00:22:41.015 06:09:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:41.275 06:09:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:22:41.275 06:09:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:22:41.275 06:09:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:41.275 06:09:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@117 -- # sync 00:22:41.275 06:09:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:41.275 06:09:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 00:22:41.275 06:09:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:41.275 06:09:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:41.275 rmmod nvme_tcp 00:22:41.275 rmmod nvme_fabrics 00:22:41.275 rmmod nvme_keyring 00:22:41.275 06:09:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:41.275 06:09:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@124 -- # 
set -e 00:22:41.275 06:09:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:22:41.275 06:09:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 95934 ']' 00:22:41.275 06:09:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 95934 00:22:41.275 06:09:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 95934 ']' 00:22:41.275 06:09:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 95934 00:22:41.275 06:09:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:22:41.275 06:09:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:41.275 06:09:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 95934 00:22:41.275 killing process with pid 95934 00:22:41.275 06:09:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:41.275 06:09:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:41.275 06:09:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 95934' 00:22:41.275 06:09:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 95934 00:22:41.275 06:09:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 95934 00:22:41.541 06:09:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:41.541 06:09:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:41.541 06:09:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:41.541 06:09:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:41.541 06:09:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:41.541 06:09:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:41.541 06:09:33 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:41.542 06:09:33 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:41.542 06:09:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:41.542 00:22:41.542 real 0m44.857s 00:22:41.542 user 2m12.507s 00:22:41.542 sys 0m5.196s 00:22:41.542 06:09:33 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:41.542 ************************************ 00:22:41.542 END TEST nvmf_timeout 00:22:41.542 06:09:33 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:41.542 ************************************ 00:22:41.542 06:09:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:41.542 06:09:33 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ virt == phy ]] 00:22:41.542 06:09:33 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:22:41.542 06:09:33 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:41.542 06:09:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:41.542 06:09:33 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:22:41.542 00:22:41.542 real 14m8.461s 00:22:41.542 user 37m32.583s 00:22:41.542 sys 4m3.491s 00:22:41.542 06:09:33 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:41.542 06:09:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:41.542 ************************************ 00:22:41.542 END TEST nvmf_tcp 00:22:41.542 ************************************ 00:22:41.801 06:09:33 -- common/autotest_common.sh@1142 -- # 
return 0 00:22:41.801 06:09:33 -- spdk/autotest.sh@288 -- # [[ 1 -eq 0 ]] 00:22:41.801 06:09:33 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:22:41.801 06:09:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:41.801 06:09:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:41.801 06:09:33 -- common/autotest_common.sh@10 -- # set +x 00:22:41.801 ************************************ 00:22:41.801 START TEST nvmf_dif 00:22:41.801 ************************************ 00:22:41.801 06:09:33 nvmf_dif -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:22:41.801 * Looking for test storage... 00:22:41.801 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:41.801 06:09:33 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:41.801 06:09:33 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:22:41.801 06:09:33 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:41.801 06:09:33 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:41.801 06:09:33 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:41.801 06:09:33 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:41.801 06:09:33 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:41.801 06:09:33 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:41.801 06:09:33 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:41.801 06:09:33 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:41.801 06:09:33 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:41.801 06:09:33 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:41.801 06:09:33 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:22:41.801 06:09:33 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=d95af516-4532-4483-a837-b3cd72acabce 00:22:41.801 06:09:33 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:41.801 06:09:33 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:41.801 06:09:33 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:41.801 06:09:33 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:41.801 06:09:33 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:41.801 06:09:33 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:41.801 06:09:33 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:41.801 06:09:33 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:41.801 06:09:33 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.801 06:09:33 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.801 06:09:33 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.801 06:09:33 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:22:41.801 06:09:33 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.801 06:09:33 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:22:41.801 06:09:33 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:41.801 06:09:33 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:41.801 06:09:33 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:41.801 06:09:33 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:41.801 06:09:33 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:41.801 06:09:33 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:41.801 06:09:33 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:41.801 06:09:33 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:41.801 06:09:33 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:22:41.801 06:09:33 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:22:41.801 06:09:33 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:22:41.801 06:09:33 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:22:41.801 06:09:33 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:22:41.801 06:09:33 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:41.801 06:09:33 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:41.801 06:09:33 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:41.801 06:09:33 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:41.801 06:09:33 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:41.801 06:09:33 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:41.801 06:09:33 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:41.801 06:09:33 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:41.801 06:09:33 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:41.801 06:09:33 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:41.801 06:09:33 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:41.801 06:09:33 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:41.801 06:09:33 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:41.801 06:09:33 nvmf_dif -- 
nvmf/common.sh@432 -- # nvmf_veth_init 00:22:41.801 06:09:33 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:41.801 06:09:33 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:41.801 06:09:33 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:41.801 06:09:33 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:41.801 06:09:33 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:41.801 06:09:33 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:41.801 06:09:33 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:41.801 06:09:33 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:41.801 06:09:33 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:41.801 06:09:33 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:41.801 06:09:33 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:41.801 06:09:33 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:41.801 06:09:33 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:41.802 06:09:33 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:41.802 Cannot find device "nvmf_tgt_br" 00:22:41.802 06:09:33 nvmf_dif -- nvmf/common.sh@155 -- # true 00:22:41.802 06:09:33 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:41.802 Cannot find device "nvmf_tgt_br2" 00:22:41.802 06:09:33 nvmf_dif -- nvmf/common.sh@156 -- # true 00:22:41.802 06:09:33 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:41.802 06:09:33 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:41.802 Cannot find device "nvmf_tgt_br" 00:22:41.802 06:09:33 nvmf_dif -- nvmf/common.sh@158 -- # true 00:22:41.802 06:09:33 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:41.802 Cannot find device "nvmf_tgt_br2" 00:22:41.802 06:09:33 nvmf_dif -- nvmf/common.sh@159 -- # true 00:22:41.802 06:09:33 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:42.061 06:09:33 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:42.061 06:09:33 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:42.061 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:42.061 06:09:33 nvmf_dif -- nvmf/common.sh@162 -- # true 00:22:42.061 06:09:33 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:42.061 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:42.061 06:09:33 nvmf_dif -- nvmf/common.sh@163 -- # true 00:22:42.061 06:09:33 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:42.061 06:09:33 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:42.061 06:09:33 nvmf_dif -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:42.061 06:09:33 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:42.061 06:09:33 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:42.061 06:09:33 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:42.061 06:09:33 nvmf_dif -- nvmf/common.sh@178 -- # ip addr 
add 10.0.0.1/24 dev nvmf_init_if 00:22:42.061 06:09:33 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:42.061 06:09:33 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:42.061 06:09:33 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:42.061 06:09:33 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:42.061 06:09:33 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:42.061 06:09:33 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:42.061 06:09:33 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:42.061 06:09:33 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:42.061 06:09:33 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:42.061 06:09:33 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:42.061 06:09:33 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:42.061 06:09:33 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:42.061 06:09:33 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:42.061 06:09:33 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:42.061 06:09:33 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:42.061 06:09:33 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:42.061 06:09:33 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:42.061 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:42.061 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:22:42.061 00:22:42.061 --- 10.0.0.2 ping statistics --- 00:22:42.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:42.061 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:22:42.061 06:09:33 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:42.061 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:42.061 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:22:42.061 00:22:42.061 --- 10.0.0.3 ping statistics --- 00:22:42.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:42.061 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:22:42.061 06:09:33 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:42.061 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:42.061 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:22:42.061 00:22:42.061 --- 10.0.0.1 ping statistics --- 00:22:42.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:42.061 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:22:42.061 06:09:33 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:42.061 06:09:33 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:22:42.061 06:09:33 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:22:42.061 06:09:33 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:42.630 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:42.630 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:42.630 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:42.630 06:09:34 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:42.630 06:09:34 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:42.630 06:09:34 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:42.630 06:09:34 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:42.630 06:09:34 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:42.630 06:09:34 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:42.630 06:09:34 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:22:42.630 06:09:34 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:22:42.630 06:09:34 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:42.630 06:09:34 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:42.630 06:09:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:42.630 06:09:34 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=96834 00:22:42.630 06:09:34 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 96834 00:22:42.630 06:09:34 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:42.630 06:09:34 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 96834 ']' 00:22:42.630 06:09:34 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:42.630 06:09:34 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:42.630 06:09:34 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:42.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:42.630 06:09:34 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:42.630 06:09:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:42.630 [2024-07-13 06:09:34.245562] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:22:42.630 [2024-07-13 06:09:34.245684] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:42.889 [2024-07-13 06:09:34.386018] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:42.889 [2024-07-13 06:09:34.428785] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:42.889 [2024-07-13 06:09:34.428853] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:42.889 [2024-07-13 06:09:34.428876] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:42.889 [2024-07-13 06:09:34.428897] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:42.889 [2024-07-13 06:09:34.428906] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:42.889 [2024-07-13 06:09:34.428934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:42.889 [2024-07-13 06:09:34.464221] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:42.889 06:09:34 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:42.889 06:09:34 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:22:42.889 06:09:34 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:42.889 06:09:34 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:42.889 06:09:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:42.889 06:09:34 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:42.889 06:09:34 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:22:42.889 06:09:34 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:22:42.889 06:09:34 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.889 06:09:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:42.889 [2024-07-13 06:09:34.554791] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:42.889 06:09:34 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.889 06:09:34 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:22:42.889 06:09:34 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:42.889 06:09:34 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:42.889 06:09:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:42.889 ************************************ 00:22:42.889 START TEST fio_dif_1_default 00:22:42.889 ************************************ 00:22:42.889 06:09:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:22:42.889 06:09:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:22:42.889 06:09:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:22:42.889 06:09:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:22:42.889 06:09:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:22:42.889 06:09:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:22:42.889 06:09:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:22:42.889 06:09:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.889 06:09:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:42.889 bdev_null0 00:22:42.889 06:09:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.889 06:09:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:42.889 
06:09:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.889 06:09:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:42.889 06:09:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.889 06:09:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:42.890 06:09:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.890 06:09:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:42.890 06:09:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.890 06:09:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:42.890 06:09:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.890 06:09:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:42.890 [2024-07-13 06:09:34.598959] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:42.890 06:09:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.890 06:09:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:22:42.890 06:09:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:22:42.890 06:09:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:22:42.890 06:09:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:42.890 06:09:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:22:42.890 06:09:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:42.890 06:09:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:22:42.890 06:09:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:22:42.890 06:09:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:42.890 06:09:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:22:42.890 06:09:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:42.890 06:09:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:22:42.890 06:09:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:42.890 06:09:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:42.890 06:09:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:22:42.890 06:09:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:42.890 06:09:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:42.890 06:09:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:42.890 06:09:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:42.890 { 00:22:42.890 "params": { 00:22:42.890 "name": "Nvme$subsystem", 00:22:42.890 "trtype": "$TEST_TRANSPORT", 00:22:42.890 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:22:42.890 "adrfam": "ipv4", 00:22:42.890 "trsvcid": "$NVMF_PORT", 00:22:42.890 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:42.890 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:42.890 "hdgst": ${hdgst:-false}, 00:22:42.890 "ddgst": ${ddgst:-false} 00:22:42.890 }, 00:22:42.890 "method": "bdev_nvme_attach_controller" 00:22:42.890 } 00:22:42.890 EOF 00:22:42.890 )") 00:22:42.890 06:09:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:42.890 06:09:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:22:42.890 06:09:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:22:42.890 06:09:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:22:42.890 06:09:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:42.890 06:09:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:22:42.890 06:09:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:22:43.149 06:09:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:22:43.149 06:09:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:43.149 "params": { 00:22:43.149 "name": "Nvme0", 00:22:43.149 "trtype": "tcp", 00:22:43.149 "traddr": "10.0.0.2", 00:22:43.149 "adrfam": "ipv4", 00:22:43.149 "trsvcid": "4420", 00:22:43.149 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:43.149 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:43.149 "hdgst": false, 00:22:43.149 "ddgst": false 00:22:43.149 }, 00:22:43.149 "method": "bdev_nvme_attach_controller" 00:22:43.149 }' 00:22:43.149 06:09:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:43.149 06:09:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:43.149 06:09:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:43.149 06:09:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:43.149 06:09:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:43.149 06:09:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:43.149 06:09:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:43.149 06:09:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:43.149 06:09:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:43.149 06:09:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:43.149 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:22:43.149 fio-3.35 00:22:43.149 Starting 1 thread 00:22:55.387 00:22:55.387 filename0: (groupid=0, jobs=1): err= 0: pid=96893: Sat Jul 13 06:09:45 2024 00:22:55.387 read: IOPS=7967, BW=31.1MiB/s (32.6MB/s)(311MiB/10001msec) 00:22:55.387 slat (nsec): min=6810, max=55602, avg=9590.71, stdev=4364.49 00:22:55.387 clat (usec): min=369, max=3683, avg=474.03, stdev=45.54 00:22:55.387 lat (usec): min=377, max=3712, avg=483.62, stdev=46.15 00:22:55.387 clat percentiles (usec): 00:22:55.387 | 1.00th=[ 400], 5.00th=[ 416], 
10.00th=[ 429], 20.00th=[ 441], 00:22:55.387 | 30.00th=[ 453], 40.00th=[ 461], 50.00th=[ 469], 60.00th=[ 482], 00:22:55.387 | 70.00th=[ 490], 80.00th=[ 502], 90.00th=[ 523], 95.00th=[ 537], 00:22:55.387 | 99.00th=[ 578], 99.50th=[ 594], 99.90th=[ 627], 99.95th=[ 668], 00:22:55.387 | 99.99th=[ 1500] 00:22:55.387 bw ( KiB/s): min=30464, max=32288, per=100.00%, avg=31897.26, stdev=406.68, samples=19 00:22:55.387 iops : min= 7616, max= 8072, avg=7974.32, stdev=101.67, samples=19 00:22:55.387 lat (usec) : 500=77.62%, 750=22.34% 00:22:55.387 lat (msec) : 2=0.04%, 4=0.01% 00:22:55.387 cpu : usr=85.22%, sys=12.83%, ctx=20, majf=0, minf=0 00:22:55.387 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:55.387 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:55.387 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:55.387 issued rwts: total=79684,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:55.387 latency : target=0, window=0, percentile=100.00%, depth=4 00:22:55.387 00:22:55.387 Run status group 0 (all jobs): 00:22:55.387 READ: bw=31.1MiB/s (32.6MB/s), 31.1MiB/s-31.1MiB/s (32.6MB/s-32.6MB/s), io=311MiB (326MB), run=10001-10001msec 00:22:55.387 06:09:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:22:55.387 06:09:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:22:55.387 06:09:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:22:55.387 06:09:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:55.387 06:09:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:22:55.387 06:09:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:55.387 06:09:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.387 06:09:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:55.387 06:09:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.387 06:09:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:55.387 06:09:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.387 06:09:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:55.387 06:09:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.387 00:22:55.387 real 0m10.865s 00:22:55.387 user 0m9.054s 00:22:55.387 sys 0m1.528s 00:22:55.387 06:09:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:55.387 06:09:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:55.387 ************************************ 00:22:55.387 END TEST fio_dif_1_default 00:22:55.387 ************************************ 00:22:55.387 06:09:45 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:22:55.387 06:09:45 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:22:55.387 06:09:45 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:55.387 06:09:45 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:55.387 06:09:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:55.387 ************************************ 00:22:55.387 START TEST fio_dif_1_multi_subsystems 00:22:55.387 ************************************ 00:22:55.387 06:09:45 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:22:55.387 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:22:55.387 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:22:55.387 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:22:55.387 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:22:55.387 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:22:55.387 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:22:55.387 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:22:55.387 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.387 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:55.387 bdev_null0 00:22:55.387 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.387 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:55.387 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.387 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:55.387 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.387 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:55.387 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.388 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:55.388 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.388 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:55.388 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.388 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:55.388 [2024-07-13 06:09:45.518351] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:55.388 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.388 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:22:55.388 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:22:55.388 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:22:55.388 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:22:55.388 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.388 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:55.388 bdev_null1 00:22:55.388 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.388 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:22:55.388 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.388 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:55.388 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.388 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:22:55.388 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.388 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:55.388 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.388 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:55.388 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.388 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:55.388 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.388 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:22:55.388 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:22:55.388 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:22:55.388 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:22:55.388 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:22:55.388 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:55.388 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:55.388 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:22:55.388 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:55.388 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:55.388 { 00:22:55.388 "params": { 00:22:55.388 "name": "Nvme$subsystem", 00:22:55.388 "trtype": "$TEST_TRANSPORT", 00:22:55.388 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:55.388 "adrfam": "ipv4", 00:22:55.388 "trsvcid": "$NVMF_PORT", 00:22:55.388 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:55.388 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:55.388 "hdgst": ${hdgst:-false}, 00:22:55.388 "ddgst": ${ddgst:-false} 00:22:55.388 }, 00:22:55.388 "method": "bdev_nvme_attach_controller" 00:22:55.388 } 00:22:55.388 EOF 00:22:55.388 )") 00:22:55.388 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:22:55.388 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:55.388 06:09:45 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:22:55.388 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:55.388 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:55.388 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:55.388 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:22:55.388 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:55.388 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:55.388 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:22:55.388 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:55.388 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:22:55.388 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:55.388 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:22:55.388 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:22:55.388 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:22:55.388 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:55.388 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:55.388 { 00:22:55.388 "params": { 00:22:55.388 "name": "Nvme$subsystem", 00:22:55.388 "trtype": "$TEST_TRANSPORT", 00:22:55.388 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:55.388 "adrfam": "ipv4", 00:22:55.388 "trsvcid": "$NVMF_PORT", 00:22:55.388 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:55.388 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:55.388 "hdgst": ${hdgst:-false}, 00:22:55.388 "ddgst": ${ddgst:-false} 00:22:55.388 }, 00:22:55.388 "method": "bdev_nvme_attach_controller" 00:22:55.388 } 00:22:55.388 EOF 00:22:55.388 )") 00:22:55.388 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:22:55.388 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:22:55.388 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:22:55.388 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
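The xtrace above covers the whole setup for this test case: target/dif.sh creates bdev_null0/bdev_null1 and the matching cnode0/cnode1 subsystems through the rpc_cmd helper, then gen_nvmf_target_json assembles one bdev_nvme_attach_controller fragment per subsystem from a heredoc and joins them with IFS="," before jq pretty-prints the result shown just below. A minimal sketch of the same provisioning as direct RPC calls, assuming SPDK's stock scripts/rpc.py client in place of the rpc_cmd harness wrapper:

# Sketch only; RPC names, flags and values are taken verbatim from the trace above,
# but scripts/rpc.py as the entry point is an assumption.
for i in 0 1; do
    scripts/rpc.py bdev_null_create "bdev_null$i" 64 512 --md-size 16 --dif-type 1
    scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
        --serial-number "53313233-$i" --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
    scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420
done

The destroy_subsystems trace later in this run reverses the same steps with nvmf_delete_subsystem and bdev_null_delete.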
00:22:55.388 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:22:55.388 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:55.388 "params": { 00:22:55.388 "name": "Nvme0", 00:22:55.388 "trtype": "tcp", 00:22:55.388 "traddr": "10.0.0.2", 00:22:55.388 "adrfam": "ipv4", 00:22:55.388 "trsvcid": "4420", 00:22:55.388 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:55.388 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:55.388 "hdgst": false, 00:22:55.388 "ddgst": false 00:22:55.388 }, 00:22:55.388 "method": "bdev_nvme_attach_controller" 00:22:55.388 },{ 00:22:55.388 "params": { 00:22:55.388 "name": "Nvme1", 00:22:55.388 "trtype": "tcp", 00:22:55.388 "traddr": "10.0.0.2", 00:22:55.388 "adrfam": "ipv4", 00:22:55.388 "trsvcid": "4420", 00:22:55.388 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:55.388 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:55.388 "hdgst": false, 00:22:55.388 "ddgst": false 00:22:55.388 }, 00:22:55.388 "method": "bdev_nvme_attach_controller" 00:22:55.388 }' 00:22:55.388 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:55.388 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:55.388 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:55.388 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:55.388 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:55.388 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:55.388 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:55.388 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:55.388 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:55.388 06:09:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:55.388 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:22:55.388 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:22:55.388 fio-3.35 00:22:55.388 Starting 2 threads 00:23:05.363 00:23:05.363 filename0: (groupid=0, jobs=1): err= 0: pid=97051: Sat Jul 13 06:09:56 2024 00:23:05.363 read: IOPS=4327, BW=16.9MiB/s (17.7MB/s)(169MiB/10001msec) 00:23:05.363 slat (nsec): min=7126, max=78833, avg=15415.36, stdev=5887.02 00:23:05.363 clat (usec): min=614, max=2765, avg=882.79, stdev=66.06 00:23:05.363 lat (usec): min=622, max=2792, avg=898.20, stdev=67.40 00:23:05.363 clat percentiles (usec): 00:23:05.363 | 1.00th=[ 742], 5.00th=[ 783], 10.00th=[ 807], 20.00th=[ 832], 00:23:05.363 | 30.00th=[ 848], 40.00th=[ 865], 50.00th=[ 881], 60.00th=[ 898], 00:23:05.363 | 70.00th=[ 914], 80.00th=[ 938], 90.00th=[ 963], 95.00th=[ 988], 00:23:05.363 | 99.00th=[ 1029], 99.50th=[ 1057], 99.90th=[ 1106], 99.95th=[ 1483], 00:23:05.363 | 99.99th=[ 1598] 00:23:05.363 bw ( KiB/s): min=16992, max=17472, per=50.00%, avg=17310.32, stdev=139.27, samples=19 00:23:05.363 iops : min= 4248, 
max= 4368, avg=4327.58, stdev=34.82, samples=19 00:23:05.363 lat (usec) : 750=1.44%, 1000=95.40% 00:23:05.363 lat (msec) : 2=3.15%, 4=0.01% 00:23:05.363 cpu : usr=89.60%, sys=8.94%, ctx=68, majf=0, minf=9 00:23:05.363 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:05.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:05.363 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:05.363 issued rwts: total=43276,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:05.363 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:05.363 filename1: (groupid=0, jobs=1): err= 0: pid=97052: Sat Jul 13 06:09:56 2024 00:23:05.363 read: IOPS=4327, BW=16.9MiB/s (17.7MB/s)(169MiB/10001msec) 00:23:05.363 slat (nsec): min=6931, max=74856, avg=15412.59, stdev=5934.71 00:23:05.363 clat (usec): min=544, max=2996, avg=882.33, stdev=58.23 00:23:05.363 lat (usec): min=552, max=3024, avg=897.74, stdev=58.87 00:23:05.363 clat percentiles (usec): 00:23:05.363 | 1.00th=[ 775], 5.00th=[ 799], 10.00th=[ 816], 20.00th=[ 840], 00:23:05.363 | 30.00th=[ 848], 40.00th=[ 865], 50.00th=[ 881], 60.00th=[ 889], 00:23:05.363 | 70.00th=[ 906], 80.00th=[ 930], 90.00th=[ 955], 95.00th=[ 971], 00:23:05.363 | 99.00th=[ 1020], 99.50th=[ 1037], 99.90th=[ 1090], 99.95th=[ 1467], 00:23:05.363 | 99.99th=[ 1598] 00:23:05.363 bw ( KiB/s): min=16992, max=17472, per=50.01%, avg=17312.00, stdev=141.91, samples=19 00:23:05.363 iops : min= 4248, max= 4368, avg=4328.00, stdev=35.48, samples=19 00:23:05.363 lat (usec) : 750=0.09%, 1000=97.86% 00:23:05.363 lat (msec) : 2=2.04%, 4=0.01% 00:23:05.363 cpu : usr=89.71%, sys=8.90%, ctx=25, majf=0, minf=0 00:23:05.363 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:05.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:05.363 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:05.363 issued rwts: total=43277,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:05.363 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:05.363 00:23:05.363 Run status group 0 (all jobs): 00:23:05.363 READ: bw=33.8MiB/s (35.4MB/s), 16.9MiB/s-16.9MiB/s (17.7MB/s-17.7MB/s), io=338MiB (355MB), run=10001-10001msec 00:23:05.363 06:09:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:23:05.363 06:09:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:23:05.363 06:09:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:23:05.363 06:09:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:05.363 06:09:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:23:05.363 06:09:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:05.363 06:09:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.363 06:09:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:05.363 06:09:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.363 06:09:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:05.363 06:09:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.363 06:09:56 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:23:05.363 06:09:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.363 06:09:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:23:05.363 06:09:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:23:05.363 06:09:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:23:05.363 06:09:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:05.363 06:09:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.363 06:09:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:05.363 06:09:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.363 06:09:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:23:05.363 06:09:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.363 06:09:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:05.363 ************************************ 00:23:05.363 END TEST fio_dif_1_multi_subsystems 00:23:05.363 ************************************ 00:23:05.363 06:09:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.363 00:23:05.363 real 0m10.975s 00:23:05.363 user 0m18.542s 00:23:05.363 sys 0m2.055s 00:23:05.363 06:09:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:05.363 06:09:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:05.363 06:09:56 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:23:05.363 06:09:56 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:23:05.363 06:09:56 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:05.363 06:09:56 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:05.363 06:09:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:05.363 ************************************ 00:23:05.363 START TEST fio_dif_rand_params 00:23:05.363 ************************************ 00:23:05.363 06:09:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:23:05.363 06:09:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:23:05.363 06:09:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:23:05.363 06:09:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:23:05.363 06:09:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:23:05.363 06:09:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:23:05.363 06:09:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:23:05.363 06:09:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:23:05.363 06:09:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:23:05.363 06:09:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:05.363 06:09:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:05.363 06:09:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:23:05.363 06:09:56 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:23:05.363 06:09:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:23:05.363 06:09:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.363 06:09:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:05.363 bdev_null0 00:23:05.363 06:09:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.363 06:09:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:05.363 06:09:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.363 06:09:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:05.363 06:09:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.363 06:09:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:05.363 06:09:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.363 06:09:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:05.364 06:09:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.364 06:09:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:05.364 06:09:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.364 06:09:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:05.364 [2024-07-13 06:09:56.552456] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:05.364 06:09:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.364 06:09:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:23:05.364 06:09:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:23:05.364 06:09:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:05.364 06:09:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:23:05.364 06:09:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:23:05.364 06:09:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:05.364 06:09:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:05.364 06:09:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:05.364 06:09:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:23:05.364 06:09:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:05.364 { 00:23:05.364 "params": { 00:23:05.364 "name": "Nvme$subsystem", 00:23:05.364 "trtype": "$TEST_TRANSPORT", 00:23:05.364 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:05.364 "adrfam": "ipv4", 00:23:05.364 "trsvcid": "$NVMF_PORT", 00:23:05.364 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:05.364 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:23:05.364 "hdgst": ${hdgst:-false}, 00:23:05.364 "ddgst": ${ddgst:-false} 00:23:05.364 }, 00:23:05.364 "method": "bdev_nvme_attach_controller" 00:23:05.364 } 00:23:05.364 EOF 00:23:05.364 )") 00:23:05.364 06:09:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:05.364 06:09:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:23:05.364 06:09:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:05.364 06:09:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:05.364 06:09:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:05.364 06:09:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:05.364 06:09:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:05.364 06:09:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:23:05.364 06:09:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:05.364 06:09:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:05.364 06:09:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:05.364 06:09:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:05.364 06:09:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:05.364 06:09:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:05.364 06:09:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:23:05.364 06:09:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:23:05.364 06:09:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:23:05.364 06:09:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:05.364 "params": { 00:23:05.364 "name": "Nvme0", 00:23:05.364 "trtype": "tcp", 00:23:05.364 "traddr": "10.0.0.2", 00:23:05.364 "adrfam": "ipv4", 00:23:05.364 "trsvcid": "4420", 00:23:05.364 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:05.364 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:05.364 "hdgst": false, 00:23:05.364 "ddgst": false 00:23:05.364 }, 00:23:05.364 "method": "bdev_nvme_attach_controller" 00:23:05.364 }' 00:23:05.364 06:09:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:05.364 06:09:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:05.364 06:09:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:05.364 06:09:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:05.364 06:09:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:05.364 06:09:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:05.364 06:09:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:05.364 06:09:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:05.364 06:09:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:05.364 06:09:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:05.364 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:23:05.364 ... 
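The filename0 job line fio echoes above comes from the config gen_fio_conf feeds it on /dev/fd/61, while the JSON from gen_nvmf_target_json arrives via --spdk_json_conf on /dev/fd/62. A standalone approximation of this 128k randread run with 3 jobs at iodepth 3 could look like the sketch below; the file names and the Nvme0n1 bdev name are assumptions rather than values from this log:

# Sketch: write an equivalent job file, then invoke fio the way the trace does,
# only with on-disk files instead of /dev/fd descriptors.
cat > dif_randread.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
time_based
runtime=5

[filename0]
filename=Nvme0n1
EOF

LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    fio --ioengine=spdk_bdev --spdk_json_conf ./nvme0.json ./dif_randread.fio

Here nvme0.json stands in for the bdev_nvme_attach_controller config printed earlier in this trace.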
00:23:05.364 fio-3.35 00:23:05.364 Starting 3 threads 00:23:10.655 00:23:10.655 filename0: (groupid=0, jobs=1): err= 0: pid=97198: Sat Jul 13 06:10:02 2024 00:23:10.655 read: IOPS=259, BW=32.4MiB/s (34.0MB/s)(162MiB/5002msec) 00:23:10.655 slat (nsec): min=7288, max=44161, avg=13540.39, stdev=4064.65 00:23:10.655 clat (usec): min=10215, max=14380, avg=11545.94, stdev=767.96 00:23:10.655 lat (usec): min=10228, max=14407, avg=11559.48, stdev=767.72 00:23:10.655 clat percentiles (usec): 00:23:10.655 | 1.00th=[10290], 5.00th=[10552], 10.00th=[10683], 20.00th=[10814], 00:23:10.656 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11338], 60.00th=[11600], 00:23:10.656 | 70.00th=[11994], 80.00th=[12256], 90.00th=[12649], 95.00th=[12911], 00:23:10.656 | 99.00th=[13435], 99.50th=[13566], 99.90th=[14353], 99.95th=[14353], 00:23:10.656 | 99.99th=[14353] 00:23:10.656 bw ( KiB/s): min=29952, max=35328, per=33.19%, avg=33024.00, stdev=2067.90, samples=9 00:23:10.656 iops : min= 234, max= 276, avg=258.00, stdev=16.16, samples=9 00:23:10.656 lat (msec) : 20=100.00% 00:23:10.656 cpu : usr=91.12%, sys=8.18%, ctx=88, majf=0, minf=0 00:23:10.656 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:10.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:10.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:10.656 issued rwts: total=1296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:10.656 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:10.656 filename0: (groupid=0, jobs=1): err= 0: pid=97199: Sat Jul 13 06:10:02 2024 00:23:10.656 read: IOPS=259, BW=32.4MiB/s (34.0MB/s)(162MiB/5005msec) 00:23:10.656 slat (nsec): min=6778, max=40576, avg=10219.01, stdev=4375.98 00:23:10.656 clat (usec): min=4984, max=13562, avg=11532.30, stdev=818.02 00:23:10.656 lat (usec): min=4991, max=13580, avg=11542.52, stdev=818.85 00:23:10.656 clat percentiles (usec): 00:23:10.656 | 1.00th=[10290], 5.00th=[10552], 10.00th=[10683], 20.00th=[10814], 00:23:10.656 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11338], 60.00th=[11600], 00:23:10.656 | 70.00th=[11994], 80.00th=[12387], 90.00th=[12649], 95.00th=[12911], 00:23:10.656 | 99.00th=[13435], 99.50th=[13435], 99.90th=[13566], 99.95th=[13566], 00:23:10.656 | 99.99th=[13566] 00:23:10.656 bw ( KiB/s): min=29892, max=35328, per=33.18%, avg=33017.33, stdev=2043.34, samples=9 00:23:10.656 iops : min= 233, max= 276, avg=257.89, stdev=16.07, samples=9 00:23:10.656 lat (msec) : 10=0.23%, 20=99.77% 00:23:10.656 cpu : usr=91.97%, sys=7.43%, ctx=93, majf=0, minf=0 00:23:10.656 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:10.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:10.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:10.656 issued rwts: total=1299,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:10.656 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:10.656 filename0: (groupid=0, jobs=1): err= 0: pid=97200: Sat Jul 13 06:10:02 2024 00:23:10.656 read: IOPS=259, BW=32.4MiB/s (34.0MB/s)(162MiB/5001msec) 00:23:10.656 slat (nsec): min=7264, max=45436, avg=13997.33, stdev=4292.09 00:23:10.656 clat (usec): min=10209, max=15587, avg=11543.72, stdev=779.64 00:23:10.656 lat (usec): min=10221, max=15612, avg=11557.72, stdev=779.23 00:23:10.656 clat percentiles (usec): 00:23:10.656 | 1.00th=[10290], 5.00th=[10552], 10.00th=[10683], 20.00th=[10814], 00:23:10.656 | 30.00th=[11076], 40.00th=[11207], 
50.00th=[11338], 60.00th=[11600], 00:23:10.656 | 70.00th=[11994], 80.00th=[12256], 90.00th=[12649], 95.00th=[12911], 00:23:10.656 | 99.00th=[13304], 99.50th=[13566], 99.90th=[15533], 99.95th=[15533], 00:23:10.656 | 99.99th=[15533] 00:23:10.656 bw ( KiB/s): min=29242, max=35328, per=33.19%, avg=33030.44, stdev=2090.07, samples=9 00:23:10.656 iops : min= 228, max= 276, avg=258.00, stdev=16.43, samples=9 00:23:10.656 lat (msec) : 20=100.00% 00:23:10.656 cpu : usr=90.94%, sys=8.36%, ctx=49, majf=0, minf=9 00:23:10.656 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:10.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:10.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:10.656 issued rwts: total=1296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:10.656 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:10.656 00:23:10.656 Run status group 0 (all jobs): 00:23:10.656 READ: bw=97.2MiB/s (102MB/s), 32.4MiB/s-32.4MiB/s (34.0MB/s-34.0MB/s), io=486MiB (510MB), run=5001-5005msec 00:23:10.656 06:10:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:23:10.656 06:10:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:23:10.656 06:10:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:10.656 06:10:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:10.656 06:10:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:23:10.656 06:10:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:10.656 06:10:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.656 06:10:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:10.656 06:10:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.656 06:10:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:10.656 06:10:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.656 06:10:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:23:10.915 06:10:02 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:10.915 bdev_null0 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:10.915 [2024-07-13 06:10:02.416188] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:10.915 bdev_null1 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:10.915 bdev_null2 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:10.915 06:10:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:10.915 { 00:23:10.915 "params": { 00:23:10.915 "name": "Nvme$subsystem", 00:23:10.915 "trtype": "$TEST_TRANSPORT", 00:23:10.916 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.916 "adrfam": "ipv4", 00:23:10.916 "trsvcid": "$NVMF_PORT", 00:23:10.916 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:23:10.916 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.916 "hdgst": ${hdgst:-false}, 00:23:10.916 "ddgst": ${ddgst:-false} 00:23:10.916 }, 00:23:10.916 "method": "bdev_nvme_attach_controller" 00:23:10.916 } 00:23:10.916 EOF 00:23:10.916 )") 00:23:10.916 06:10:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:23:10.916 06:10:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:10.916 06:10:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:23:10.916 06:10:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:10.916 06:10:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:10.916 06:10:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:10.916 06:10:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:10.916 06:10:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:10.916 06:10:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:10.916 06:10:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:10.916 06:10:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:23:10.916 06:10:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:10.916 06:10:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:10.916 06:10:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:10.916 06:10:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:10.916 06:10:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:23:10.916 06:10:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:10.916 06:10:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:10.916 06:10:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:10.916 { 00:23:10.916 "params": { 00:23:10.916 "name": "Nvme$subsystem", 00:23:10.916 "trtype": "$TEST_TRANSPORT", 00:23:10.916 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.916 "adrfam": "ipv4", 00:23:10.916 "trsvcid": "$NVMF_PORT", 00:23:10.916 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.916 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.916 "hdgst": ${hdgst:-false}, 00:23:10.916 "ddgst": ${ddgst:-false} 00:23:10.916 }, 00:23:10.916 "method": "bdev_nvme_attach_controller" 00:23:10.916 } 00:23:10.916 EOF 00:23:10.916 )") 00:23:10.916 06:10:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:23:10.916 06:10:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:10.916 06:10:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:10.916 06:10:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:23:10.916 06:10:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:23:10.916 06:10:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:10.916 06:10:02 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:23:10.916 06:10:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:10.916 06:10:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:10.916 06:10:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:10.916 { 00:23:10.916 "params": { 00:23:10.916 "name": "Nvme$subsystem", 00:23:10.916 "trtype": "$TEST_TRANSPORT", 00:23:10.916 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.916 "adrfam": "ipv4", 00:23:10.916 "trsvcid": "$NVMF_PORT", 00:23:10.916 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.916 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.916 "hdgst": ${hdgst:-false}, 00:23:10.916 "ddgst": ${ddgst:-false} 00:23:10.916 }, 00:23:10.916 "method": "bdev_nvme_attach_controller" 00:23:10.916 } 00:23:10.916 EOF 00:23:10.916 )") 00:23:10.916 06:10:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:10.916 06:10:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:23:10.916 06:10:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:23:10.916 06:10:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:10.916 "params": { 00:23:10.916 "name": "Nvme0", 00:23:10.916 "trtype": "tcp", 00:23:10.916 "traddr": "10.0.0.2", 00:23:10.916 "adrfam": "ipv4", 00:23:10.916 "trsvcid": "4420", 00:23:10.916 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:10.916 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:10.916 "hdgst": false, 00:23:10.916 "ddgst": false 00:23:10.916 }, 00:23:10.916 "method": "bdev_nvme_attach_controller" 00:23:10.916 },{ 00:23:10.916 "params": { 00:23:10.916 "name": "Nvme1", 00:23:10.916 "trtype": "tcp", 00:23:10.916 "traddr": "10.0.0.2", 00:23:10.916 "adrfam": "ipv4", 00:23:10.916 "trsvcid": "4420", 00:23:10.916 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:10.916 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:10.916 "hdgst": false, 00:23:10.916 "ddgst": false 00:23:10.916 }, 00:23:10.916 "method": "bdev_nvme_attach_controller" 00:23:10.916 },{ 00:23:10.916 "params": { 00:23:10.916 "name": "Nvme2", 00:23:10.916 "trtype": "tcp", 00:23:10.916 "traddr": "10.0.0.2", 00:23:10.916 "adrfam": "ipv4", 00:23:10.916 "trsvcid": "4420", 00:23:10.916 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:10.916 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:10.916 "hdgst": false, 00:23:10.916 "ddgst": false 00:23:10.916 }, 00:23:10.916 "method": "bdev_nvme_attach_controller" 00:23:10.916 }' 00:23:10.916 06:10:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:10.916 06:10:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:10.916 06:10:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:10.916 06:10:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:10.916 06:10:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:10.916 06:10:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:10.916 06:10:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:10.916 06:10:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:10.916 06:10:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # 
LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:10.916 06:10:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:11.175 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:11.175 ... 00:23:11.175 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:11.175 ... 00:23:11.175 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:11.175 ... 00:23:11.175 fio-3.35 00:23:11.175 Starting 24 threads 00:23:23.376 00:23:23.376 filename0: (groupid=0, jobs=1): err= 0: pid=97295: Sat Jul 13 06:10:13 2024 00:23:23.376 read: IOPS=207, BW=830KiB/s (850kB/s)(8344KiB/10047msec) 00:23:23.376 slat (usec): min=4, max=8021, avg=21.52, stdev=247.91 00:23:23.376 clat (msec): min=14, max=169, avg=76.88, stdev=23.14 00:23:23.376 lat (msec): min=14, max=169, avg=76.90, stdev=23.13 00:23:23.376 clat percentiles (msec): 00:23:23.376 | 1.00th=[ 17], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 58], 00:23:23.376 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 80], 00:23:23.376 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 110], 95.00th=[ 118], 00:23:23.376 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 146], 99.95th=[ 171], 00:23:23.376 | 99.99th=[ 171] 00:23:23.376 bw ( KiB/s): min= 512, max= 1136, per=4.05%, avg=828.00, stdev=151.07, samples=20 00:23:23.376 iops : min= 128, max= 284, avg=207.00, stdev=37.77, samples=20 00:23:23.376 lat (msec) : 20=1.44%, 50=13.42%, 100=68.50%, 250=16.63% 00:23:23.376 cpu : usr=34.46%, sys=2.24%, ctx=978, majf=0, minf=9 00:23:23.376 IO depths : 1=0.1%, 2=1.8%, 4=7.0%, 8=75.6%, 16=15.4%, 32=0.0%, >=64=0.0% 00:23:23.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.376 complete : 0=0.0%, 4=89.3%, 8=9.2%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.376 issued rwts: total=2086,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.376 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:23.376 filename0: (groupid=0, jobs=1): err= 0: pid=97296: Sat Jul 13 06:10:13 2024 00:23:23.376 read: IOPS=222, BW=892KiB/s (913kB/s)(8928KiB/10010msec) 00:23:23.376 slat (usec): min=3, max=4046, avg=17.27, stdev=85.46 00:23:23.376 clat (msec): min=16, max=174, avg=71.67, stdev=21.62 00:23:23.376 lat (msec): min=16, max=174, avg=71.69, stdev=21.62 00:23:23.376 clat percentiles (msec): 00:23:23.376 | 1.00th=[ 36], 5.00th=[ 45], 10.00th=[ 47], 20.00th=[ 50], 00:23:23.376 | 30.00th=[ 57], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 74], 00:23:23.376 | 70.00th=[ 80], 80.00th=[ 88], 90.00th=[ 105], 95.00th=[ 110], 00:23:23.376 | 99.00th=[ 120], 99.50th=[ 163], 99.90th=[ 163], 99.95th=[ 176], 00:23:23.376 | 99.99th=[ 176] 00:23:23.376 bw ( KiB/s): min= 608, max= 1104, per=4.35%, avg=889.20, stdev=122.36, samples=20 00:23:23.376 iops : min= 152, max= 276, avg=222.30, stdev=30.59, samples=20 00:23:23.376 lat (msec) : 20=0.27%, 50=20.34%, 100=66.89%, 250=12.50% 00:23:23.376 cpu : usr=38.72%, sys=2.34%, ctx=1319, majf=0, minf=9 00:23:23.376 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.6%, 16=15.7%, 32=0.0%, >=64=0.0% 00:23:23.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.376 complete : 0=0.0%, 4=86.8%, 8=13.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.376 issued rwts: total=2232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:23:23.376 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:23.376 filename0: (groupid=0, jobs=1): err= 0: pid=97297: Sat Jul 13 06:10:13 2024 00:23:23.376 read: IOPS=216, BW=865KiB/s (885kB/s)(8668KiB/10026msec) 00:23:23.376 slat (usec): min=3, max=8037, avg=31.64, stdev=333.17 00:23:23.376 clat (msec): min=26, max=144, avg=73.86, stdev=20.78 00:23:23.376 lat (msec): min=26, max=144, avg=73.89, stdev=20.78 00:23:23.376 clat percentiles (msec): 00:23:23.376 | 1.00th=[ 40], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 53], 00:23:23.376 | 30.00th=[ 61], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 77], 00:23:23.376 | 70.00th=[ 83], 80.00th=[ 95], 90.00th=[ 108], 95.00th=[ 109], 00:23:23.376 | 99.00th=[ 123], 99.50th=[ 132], 99.90th=[ 140], 99.95th=[ 144], 00:23:23.376 | 99.99th=[ 144] 00:23:23.376 bw ( KiB/s): min= 656, max= 1000, per=4.21%, avg=860.40, stdev=116.73, samples=20 00:23:23.376 iops : min= 164, max= 250, avg=215.10, stdev=29.18, samples=20 00:23:23.376 lat (msec) : 50=17.21%, 100=69.31%, 250=13.47% 00:23:23.376 cpu : usr=35.90%, sys=2.29%, ctx=1058, majf=0, minf=9 00:23:23.376 IO depths : 1=0.1%, 2=0.6%, 4=2.4%, 8=81.2%, 16=15.8%, 32=0.0%, >=64=0.0% 00:23:23.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.376 complete : 0=0.0%, 4=87.7%, 8=11.8%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.376 issued rwts: total=2167,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.376 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:23.376 filename0: (groupid=0, jobs=1): err= 0: pid=97298: Sat Jul 13 06:10:13 2024 00:23:23.376 read: IOPS=216, BW=867KiB/s (888kB/s)(8692KiB/10025msec) 00:23:23.376 slat (usec): min=4, max=7027, avg=28.96, stdev=248.15 00:23:23.376 clat (msec): min=28, max=144, avg=73.63, stdev=20.84 00:23:23.376 lat (msec): min=28, max=144, avg=73.66, stdev=20.83 00:23:23.376 clat percentiles (msec): 00:23:23.376 | 1.00th=[ 41], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 53], 00:23:23.376 | 30.00th=[ 58], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 77], 00:23:23.376 | 70.00th=[ 82], 80.00th=[ 93], 90.00th=[ 108], 95.00th=[ 109], 00:23:23.376 | 99.00th=[ 121], 99.50th=[ 132], 99.90th=[ 132], 99.95th=[ 144], 00:23:23.376 | 99.99th=[ 144] 00:23:23.376 bw ( KiB/s): min= 640, max= 992, per=4.22%, avg=862.55, stdev=111.66, samples=20 00:23:23.376 iops : min= 160, max= 248, avg=215.60, stdev=27.95, samples=20 00:23:23.376 lat (msec) : 50=16.06%, 100=70.50%, 250=13.44% 00:23:23.376 cpu : usr=40.14%, sys=2.16%, ctx=1354, majf=0, minf=9 00:23:23.376 IO depths : 1=0.1%, 2=0.6%, 4=2.3%, 8=81.2%, 16=15.8%, 32=0.0%, >=64=0.0% 00:23:23.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.376 complete : 0=0.0%, 4=87.6%, 8=11.8%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.376 issued rwts: total=2173,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.376 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:23.376 filename0: (groupid=0, jobs=1): err= 0: pid=97299: Sat Jul 13 06:10:13 2024 00:23:23.376 read: IOPS=215, BW=861KiB/s (881kB/s)(8644KiB/10044msec) 00:23:23.376 slat (usec): min=7, max=8021, avg=22.47, stdev=227.95 00:23:23.376 clat (msec): min=2, max=139, avg=74.14, stdev=22.68 00:23:23.376 lat (msec): min=2, max=139, avg=74.16, stdev=22.68 00:23:23.376 clat percentiles (msec): 00:23:23.376 | 1.00th=[ 6], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 57], 00:23:23.376 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 75], 00:23:23.376 | 70.00th=[ 83], 80.00th=[ 93], 90.00th=[ 107], 95.00th=[ 112], 
00:23:23.376 | 99.00th=[ 125], 99.50th=[ 131], 99.90th=[ 140], 99.95th=[ 140], 00:23:23.376 | 99.99th=[ 140] 00:23:23.376 bw ( KiB/s): min= 592, max= 1386, per=4.21%, avg=860.10, stdev=156.59, samples=20 00:23:23.376 iops : min= 148, max= 346, avg=215.00, stdev=39.06, samples=20 00:23:23.376 lat (msec) : 4=0.74%, 10=0.74%, 20=1.48%, 50=11.57%, 100=71.12% 00:23:23.376 lat (msec) : 250=14.35% 00:23:23.376 cpu : usr=38.21%, sys=2.29%, ctx=1198, majf=0, minf=0 00:23:23.376 IO depths : 1=0.1%, 2=0.8%, 4=2.9%, 8=79.9%, 16=16.2%, 32=0.0%, >=64=0.0% 00:23:23.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.376 complete : 0=0.0%, 4=88.3%, 8=11.1%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.376 issued rwts: total=2161,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.376 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:23.376 filename0: (groupid=0, jobs=1): err= 0: pid=97300: Sat Jul 13 06:10:13 2024 00:23:23.376 read: IOPS=221, BW=886KiB/s (907kB/s)(8864KiB/10004msec) 00:23:23.376 slat (usec): min=4, max=7024, avg=22.26, stdev=191.95 00:23:23.376 clat (msec): min=7, max=163, avg=72.13, stdev=22.13 00:23:23.376 lat (msec): min=7, max=163, avg=72.15, stdev=22.13 00:23:23.376 clat percentiles (msec): 00:23:23.376 | 1.00th=[ 38], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 51], 00:23:23.376 | 30.00th=[ 56], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 75], 00:23:23.376 | 70.00th=[ 81], 80.00th=[ 88], 90.00th=[ 106], 95.00th=[ 111], 00:23:23.376 | 99.00th=[ 132], 99.50th=[ 153], 99.90th=[ 153], 99.95th=[ 165], 00:23:23.376 | 99.99th=[ 165] 00:23:23.376 bw ( KiB/s): min= 512, max= 1024, per=4.25%, avg=869.47, stdev=134.40, samples=19 00:23:23.376 iops : min= 128, max= 256, avg=217.37, stdev=33.60, samples=19 00:23:23.376 lat (msec) : 10=0.27%, 20=0.27%, 50=19.49%, 100=67.06%, 250=12.91% 00:23:23.376 cpu : usr=41.81%, sys=2.69%, ctx=1392, majf=0, minf=9 00:23:23.376 IO depths : 1=0.1%, 2=0.4%, 4=1.6%, 8=82.5%, 16=15.5%, 32=0.0%, >=64=0.0% 00:23:23.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.376 complete : 0=0.0%, 4=87.0%, 8=12.6%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.376 issued rwts: total=2216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.376 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:23.376 filename0: (groupid=0, jobs=1): err= 0: pid=97301: Sat Jul 13 06:10:13 2024 00:23:23.376 read: IOPS=210, BW=844KiB/s (864kB/s)(8444KiB/10010msec) 00:23:23.376 slat (usec): min=4, max=8026, avg=21.71, stdev=246.55 00:23:23.376 clat (msec): min=15, max=158, avg=75.74, stdev=22.91 00:23:23.376 lat (msec): min=15, max=158, avg=75.76, stdev=22.90 00:23:23.376 clat percentiles (msec): 00:23:23.376 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 52], 00:23:23.376 | 30.00th=[ 62], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 79], 00:23:23.376 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 117], 00:23:23.377 | 99.00th=[ 130], 99.50th=[ 136], 99.90th=[ 155], 99.95th=[ 159], 00:23:23.377 | 99.99th=[ 159] 00:23:23.377 bw ( KiB/s): min= 528, max= 1128, per=4.11%, avg=840.80, stdev=151.45, samples=20 00:23:23.377 iops : min= 132, max= 282, avg=210.20, stdev=37.86, samples=20 00:23:23.377 lat (msec) : 20=0.47%, 50=17.53%, 100=64.61%, 250=17.39% 00:23:23.377 cpu : usr=32.61%, sys=2.08%, ctx=916, majf=0, minf=9 00:23:23.377 IO depths : 1=0.1%, 2=1.9%, 4=7.5%, 8=75.7%, 16=14.8%, 32=0.0%, >=64=0.0% 00:23:23.377 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.377 complete : 0=0.0%, 
4=88.9%, 8=9.4%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.377 issued rwts: total=2111,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.377 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:23.377 filename0: (groupid=0, jobs=1): err= 0: pid=97302: Sat Jul 13 06:10:13 2024 00:23:23.377 read: IOPS=212, BW=848KiB/s (869kB/s)(8500KiB/10020msec) 00:23:23.377 slat (usec): min=3, max=5035, avg=19.94, stdev=143.26 00:23:23.377 clat (msec): min=29, max=124, avg=75.30, stdev=21.24 00:23:23.377 lat (msec): min=29, max=124, avg=75.32, stdev=21.24 00:23:23.377 clat percentiles (msec): 00:23:23.377 | 1.00th=[ 40], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 52], 00:23:23.377 | 30.00th=[ 64], 40.00th=[ 71], 50.00th=[ 74], 60.00th=[ 80], 00:23:23.377 | 70.00th=[ 87], 80.00th=[ 96], 90.00th=[ 107], 95.00th=[ 112], 00:23:23.377 | 99.00th=[ 117], 99.50th=[ 120], 99.90th=[ 120], 99.95th=[ 126], 00:23:23.377 | 99.99th=[ 126] 00:23:23.377 bw ( KiB/s): min= 528, max= 1000, per=4.13%, avg=843.60, stdev=136.89, samples=20 00:23:23.377 iops : min= 132, max= 250, avg=210.90, stdev=34.22, samples=20 00:23:23.377 lat (msec) : 50=17.36%, 100=66.82%, 250=15.81% 00:23:23.377 cpu : usr=40.14%, sys=2.49%, ctx=1497, majf=0, minf=9 00:23:23.377 IO depths : 1=0.1%, 2=1.6%, 4=6.4%, 8=77.0%, 16=15.0%, 32=0.0%, >=64=0.0% 00:23:23.377 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.377 complete : 0=0.0%, 4=88.6%, 8=10.0%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.377 issued rwts: total=2125,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.377 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:23.377 filename1: (groupid=0, jobs=1): err= 0: pid=97303: Sat Jul 13 06:10:13 2024 00:23:23.377 read: IOPS=214, BW=858KiB/s (879kB/s)(8612KiB/10036msec) 00:23:23.377 slat (usec): min=6, max=8023, avg=23.38, stdev=228.44 00:23:23.377 clat (msec): min=23, max=139, avg=74.41, stdev=20.26 00:23:23.377 lat (msec): min=23, max=139, avg=74.43, stdev=20.27 00:23:23.377 clat percentiles (msec): 00:23:23.377 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 55], 00:23:23.377 | 30.00th=[ 65], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 78], 00:23:23.377 | 70.00th=[ 82], 80.00th=[ 94], 90.00th=[ 106], 95.00th=[ 110], 00:23:23.377 | 99.00th=[ 118], 99.50th=[ 121], 99.90th=[ 125], 99.95th=[ 130], 00:23:23.377 | 99.99th=[ 140] 00:23:23.377 bw ( KiB/s): min= 664, max= 944, per=4.18%, avg=854.80, stdev=88.37, samples=20 00:23:23.377 iops : min= 166, max= 236, avg=213.70, stdev=22.09, samples=20 00:23:23.377 lat (msec) : 50=16.95%, 100=69.30%, 250=13.75% 00:23:23.377 cpu : usr=39.93%, sys=2.44%, ctx=1190, majf=0, minf=10 00:23:23.377 IO depths : 1=0.1%, 2=0.2%, 4=0.9%, 8=82.4%, 16=16.4%, 32=0.0%, >=64=0.0% 00:23:23.377 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.377 complete : 0=0.0%, 4=87.6%, 8=12.2%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.377 issued rwts: total=2153,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.377 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:23.377 filename1: (groupid=0, jobs=1): err= 0: pid=97304: Sat Jul 13 06:10:13 2024 00:23:23.377 read: IOPS=212, BW=850KiB/s (871kB/s)(8532KiB/10036msec) 00:23:23.377 slat (usec): min=6, max=8022, avg=24.72, stdev=238.51 00:23:23.377 clat (msec): min=22, max=144, avg=75.11, stdev=20.43 00:23:23.377 lat (msec): min=22, max=144, avg=75.13, stdev=20.44 00:23:23.377 clat percentiles (msec): 00:23:23.377 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 59], 00:23:23.377 | 30.00th=[ 
65], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 77], 00:23:23.377 | 70.00th=[ 84], 80.00th=[ 95], 90.00th=[ 107], 95.00th=[ 110], 00:23:23.377 | 99.00th=[ 121], 99.50th=[ 146], 99.90th=[ 146], 99.95th=[ 146], 00:23:23.377 | 99.99th=[ 146] 00:23:23.377 bw ( KiB/s): min= 672, max= 944, per=4.14%, avg=846.80, stdev=83.80, samples=20 00:23:23.377 iops : min= 168, max= 236, avg=211.70, stdev=20.95, samples=20 00:23:23.377 lat (msec) : 50=16.88%, 100=68.73%, 250=14.39% 00:23:23.377 cpu : usr=32.97%, sys=1.97%, ctx=1061, majf=0, minf=9 00:23:23.377 IO depths : 1=0.1%, 2=0.5%, 4=1.6%, 8=81.5%, 16=16.3%, 32=0.0%, >=64=0.0% 00:23:23.377 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.377 complete : 0=0.0%, 4=87.8%, 8=11.8%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.377 issued rwts: total=2133,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.377 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:23.377 filename1: (groupid=0, jobs=1): err= 0: pid=97305: Sat Jul 13 06:10:13 2024 00:23:23.377 read: IOPS=215, BW=861KiB/s (881kB/s)(8620KiB/10017msec) 00:23:23.377 slat (usec): min=4, max=8037, avg=26.38, stdev=273.03 00:23:23.377 clat (msec): min=29, max=154, avg=74.22, stdev=21.37 00:23:23.377 lat (msec): min=29, max=154, avg=74.24, stdev=21.37 00:23:23.377 clat percentiles (msec): 00:23:23.377 | 1.00th=[ 41], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 54], 00:23:23.377 | 30.00th=[ 61], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 77], 00:23:23.377 | 70.00th=[ 82], 80.00th=[ 93], 90.00th=[ 108], 95.00th=[ 113], 00:23:23.377 | 99.00th=[ 121], 99.50th=[ 144], 99.90th=[ 150], 99.95th=[ 155], 00:23:23.377 | 99.99th=[ 155] 00:23:23.377 bw ( KiB/s): min= 640, max= 1037, per=4.19%, avg=857.05, stdev=123.17, samples=20 00:23:23.377 iops : min= 160, max= 259, avg=214.25, stdev=30.77, samples=20 00:23:23.377 lat (msec) : 50=16.47%, 100=69.61%, 250=13.92% 00:23:23.377 cpu : usr=37.54%, sys=2.22%, ctx=1115, majf=0, minf=9 00:23:23.377 IO depths : 1=0.1%, 2=0.3%, 4=1.3%, 8=82.3%, 16=16.1%, 32=0.0%, >=64=0.0% 00:23:23.377 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.377 complete : 0=0.0%, 4=87.4%, 8=12.3%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.377 issued rwts: total=2155,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.377 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:23.377 filename1: (groupid=0, jobs=1): err= 0: pid=97306: Sat Jul 13 06:10:13 2024 00:23:23.377 read: IOPS=215, BW=861KiB/s (881kB/s)(8652KiB/10051msec) 00:23:23.377 slat (usec): min=3, max=8024, avg=17.42, stdev=172.30 00:23:23.377 clat (msec): min=7, max=141, avg=74.13, stdev=21.97 00:23:23.377 lat (msec): min=7, max=141, avg=74.15, stdev=21.97 00:23:23.377 clat percentiles (msec): 00:23:23.377 | 1.00th=[ 9], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 57], 00:23:23.377 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 77], 00:23:23.377 | 70.00th=[ 84], 80.00th=[ 95], 90.00th=[ 107], 95.00th=[ 110], 00:23:23.377 | 99.00th=[ 121], 99.50th=[ 123], 99.90th=[ 131], 99.95th=[ 132], 00:23:23.377 | 99.99th=[ 142] 00:23:23.377 bw ( KiB/s): min= 688, max= 1357, per=4.21%, avg=861.05, stdev=140.28, samples=20 00:23:23.377 iops : min= 172, max= 339, avg=215.25, stdev=35.02, samples=20 00:23:23.377 lat (msec) : 10=1.39%, 20=1.57%, 50=12.62%, 100=70.87%, 250=13.55% 00:23:23.377 cpu : usr=34.58%, sys=1.80%, ctx=1046, majf=0, minf=9 00:23:23.377 IO depths : 1=0.1%, 2=0.4%, 4=1.2%, 8=81.6%, 16=16.6%, 32=0.0%, >=64=0.0% 00:23:23.377 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.377 complete : 0=0.0%, 4=87.9%, 8=11.8%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.377 issued rwts: total=2163,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.377 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:23.377 filename1: (groupid=0, jobs=1): err= 0: pid=97307: Sat Jul 13 06:10:13 2024 00:23:23.377 read: IOPS=217, BW=872KiB/s (893kB/s)(8752KiB/10039msec) 00:23:23.377 slat (usec): min=4, max=8024, avg=22.64, stdev=250.75 00:23:23.377 clat (msec): min=8, max=156, avg=73.22, stdev=20.96 00:23:23.377 lat (msec): min=8, max=156, avg=73.25, stdev=20.97 00:23:23.377 clat percentiles (msec): 00:23:23.377 | 1.00th=[ 11], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 57], 00:23:23.377 | 30.00th=[ 63], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 74], 00:23:23.377 | 70.00th=[ 83], 80.00th=[ 88], 90.00th=[ 105], 95.00th=[ 109], 00:23:23.377 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 121], 99.95th=[ 136], 00:23:23.377 | 99.99th=[ 157] 00:23:23.377 bw ( KiB/s): min= 688, max= 1248, per=4.26%, avg=871.20, stdev=119.02, samples=20 00:23:23.377 iops : min= 172, max= 312, avg=217.80, stdev=29.76, samples=20 00:23:23.377 lat (msec) : 10=0.73%, 20=1.46%, 50=14.31%, 100=71.94%, 250=11.56% 00:23:23.377 cpu : usr=34.66%, sys=2.28%, ctx=987, majf=0, minf=9 00:23:23.377 IO depths : 1=0.1%, 2=0.3%, 4=1.1%, 8=82.2%, 16=16.4%, 32=0.0%, >=64=0.0% 00:23:23.377 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.377 complete : 0=0.0%, 4=87.6%, 8=12.1%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.377 issued rwts: total=2188,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.377 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:23.377 filename1: (groupid=0, jobs=1): err= 0: pid=97308: Sat Jul 13 06:10:13 2024 00:23:23.377 read: IOPS=199, BW=799KiB/s (818kB/s)(8016KiB/10036msec) 00:23:23.377 slat (usec): min=6, max=8028, avg=25.60, stdev=309.87 00:23:23.377 clat (msec): min=18, max=156, avg=79.85, stdev=23.89 00:23:23.377 lat (msec): min=18, max=156, avg=79.88, stdev=23.90 00:23:23.377 clat percentiles (msec): 00:23:23.377 | 1.00th=[ 20], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 61], 00:23:23.377 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 83], 00:23:23.377 | 70.00th=[ 95], 80.00th=[ 100], 90.00th=[ 110], 95.00th=[ 121], 00:23:23.377 | 99.00th=[ 144], 99.50th=[ 157], 99.90th=[ 157], 99.95th=[ 157], 00:23:23.377 | 99.99th=[ 157] 00:23:23.377 bw ( KiB/s): min= 512, max= 1008, per=3.89%, avg=795.20, stdev=139.42, samples=20 00:23:23.377 iops : min= 128, max= 252, avg=198.80, stdev=34.86, samples=20 00:23:23.377 lat (msec) : 20=1.40%, 50=12.23%, 100=66.97%, 250=19.41% 00:23:23.377 cpu : usr=32.80%, sys=2.00%, ctx=920, majf=0, minf=9 00:23:23.377 IO depths : 1=0.1%, 2=2.5%, 4=10.1%, 8=72.4%, 16=15.0%, 32=0.0%, >=64=0.0% 00:23:23.377 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.377 complete : 0=0.0%, 4=90.2%, 8=7.6%, 16=2.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.377 issued rwts: total=2004,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.377 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:23.377 filename1: (groupid=0, jobs=1): err= 0: pid=97309: Sat Jul 13 06:10:13 2024 00:23:23.377 read: IOPS=198, BW=795KiB/s (814kB/s)(7972KiB/10032msec) 00:23:23.377 slat (usec): min=3, max=8025, avg=20.14, stdev=200.76 00:23:23.377 clat (msec): min=24, max=156, avg=80.37, stdev=21.55 00:23:23.377 lat (msec): min=24, max=156, avg=80.39, stdev=21.55 00:23:23.377 clat percentiles (msec): 
00:23:23.377 | 1.00th=[ 40], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 63], 00:23:23.377 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 77], 60.00th=[ 84], 00:23:23.377 | 70.00th=[ 95], 80.00th=[ 100], 90.00th=[ 109], 95.00th=[ 116], 00:23:23.378 | 99.00th=[ 132], 99.50th=[ 153], 99.90th=[ 157], 99.95th=[ 157], 00:23:23.378 | 99.99th=[ 157] 00:23:23.378 bw ( KiB/s): min= 512, max= 920, per=3.87%, avg=790.80, stdev=130.20, samples=20 00:23:23.378 iops : min= 128, max= 230, avg=197.70, stdev=32.55, samples=20 00:23:23.378 lat (msec) : 50=10.89%, 100=69.44%, 250=19.67% 00:23:23.378 cpu : usr=34.77%, sys=2.24%, ctx=981, majf=0, minf=9 00:23:23.378 IO depths : 1=0.1%, 2=3.0%, 4=11.9%, 8=70.4%, 16=14.7%, 32=0.0%, >=64=0.0% 00:23:23.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.378 complete : 0=0.0%, 4=90.7%, 8=6.7%, 16=2.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.378 issued rwts: total=1993,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.378 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:23.378 filename1: (groupid=0, jobs=1): err= 0: pid=97310: Sat Jul 13 06:10:13 2024 00:23:23.378 read: IOPS=215, BW=860KiB/s (881kB/s)(8632KiB/10034msec) 00:23:23.378 slat (usec): min=4, max=8027, avg=21.99, stdev=203.50 00:23:23.378 clat (msec): min=35, max=144, avg=74.23, stdev=20.24 00:23:23.378 lat (msec): min=35, max=144, avg=74.25, stdev=20.24 00:23:23.378 clat percentiles (msec): 00:23:23.378 | 1.00th=[ 43], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 52], 00:23:23.378 | 30.00th=[ 63], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 75], 00:23:23.378 | 70.00th=[ 83], 80.00th=[ 95], 90.00th=[ 107], 95.00th=[ 109], 00:23:23.378 | 99.00th=[ 120], 99.50th=[ 133], 99.90th=[ 136], 99.95th=[ 144], 00:23:23.378 | 99.99th=[ 144] 00:23:23.378 bw ( KiB/s): min= 648, max= 976, per=4.19%, avg=856.80, stdev=99.61, samples=20 00:23:23.378 iops : min= 162, max= 244, avg=214.20, stdev=24.90, samples=20 00:23:23.378 lat (msec) : 50=17.70%, 100=68.54%, 250=13.76% 00:23:23.378 cpu : usr=38.02%, sys=2.41%, ctx=1110, majf=0, minf=9 00:23:23.378 IO depths : 1=0.1%, 2=0.5%, 4=2.0%, 8=81.5%, 16=15.9%, 32=0.0%, >=64=0.0% 00:23:23.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.378 complete : 0=0.0%, 4=87.6%, 8=11.9%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.378 issued rwts: total=2158,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.378 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:23.378 filename2: (groupid=0, jobs=1): err= 0: pid=97311: Sat Jul 13 06:10:13 2024 00:23:23.378 read: IOPS=210, BW=842KiB/s (862kB/s)(8464KiB/10049msec) 00:23:23.378 slat (usec): min=3, max=4032, avg=17.17, stdev=123.48 00:23:23.378 clat (msec): min=5, max=155, avg=75.83, stdev=24.66 00:23:23.378 lat (msec): min=5, max=155, avg=75.85, stdev=24.66 00:23:23.378 clat percentiles (msec): 00:23:23.378 | 1.00th=[ 8], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 55], 00:23:23.378 | 30.00th=[ 66], 40.00th=[ 71], 50.00th=[ 73], 60.00th=[ 79], 00:23:23.378 | 70.00th=[ 86], 80.00th=[ 97], 90.00th=[ 108], 95.00th=[ 117], 00:23:23.378 | 99.00th=[ 132], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 00:23:23.378 | 99.99th=[ 157] 00:23:23.378 bw ( KiB/s): min= 528, max= 1376, per=4.11%, avg=840.00, stdev=183.05, samples=20 00:23:23.378 iops : min= 132, max= 344, avg=210.00, stdev=45.76, samples=20 00:23:23.378 lat (msec) : 10=2.08%, 20=0.76%, 50=12.38%, 100=67.53%, 250=17.25% 00:23:23.378 cpu : usr=38.09%, sys=2.43%, ctx=1174, majf=0, minf=9 00:23:23.378 IO depths : 1=0.1%, 2=2.0%, 
4=7.8%, 8=74.9%, 16=15.2%, 32=0.0%, >=64=0.0% 00:23:23.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.378 complete : 0=0.0%, 4=89.5%, 8=8.8%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.378 issued rwts: total=2116,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.378 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:23.378 filename2: (groupid=0, jobs=1): err= 0: pid=97312: Sat Jul 13 06:10:13 2024 00:23:23.378 read: IOPS=216, BW=866KiB/s (886kB/s)(8688KiB/10036msec) 00:23:23.378 slat (usec): min=3, max=8022, avg=20.76, stdev=193.42 00:23:23.378 clat (msec): min=23, max=133, avg=73.82, stdev=20.51 00:23:23.378 lat (msec): min=23, max=133, avg=73.85, stdev=20.51 00:23:23.378 clat percentiles (msec): 00:23:23.378 | 1.00th=[ 39], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 52], 00:23:23.378 | 30.00th=[ 61], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 74], 00:23:23.378 | 70.00th=[ 83], 80.00th=[ 95], 90.00th=[ 106], 95.00th=[ 110], 00:23:23.378 | 99.00th=[ 121], 99.50th=[ 123], 99.90th=[ 131], 99.95th=[ 132], 00:23:23.378 | 99.99th=[ 134] 00:23:23.378 bw ( KiB/s): min= 664, max= 976, per=4.22%, avg=862.40, stdev=94.07, samples=20 00:23:23.378 iops : min= 166, max= 244, avg=215.60, stdev=23.52, samples=20 00:23:23.378 lat (msec) : 50=18.65%, 100=68.14%, 250=13.21% 00:23:23.378 cpu : usr=33.77%, sys=2.17%, ctx=988, majf=0, minf=9 00:23:23.378 IO depths : 1=0.1%, 2=0.6%, 4=2.2%, 8=81.2%, 16=16.0%, 32=0.0%, >=64=0.0% 00:23:23.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.378 complete : 0=0.0%, 4=87.7%, 8=11.8%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.378 issued rwts: total=2172,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.378 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:23.378 filename2: (groupid=0, jobs=1): err= 0: pid=97313: Sat Jul 13 06:10:13 2024 00:23:23.378 read: IOPS=222, BW=891KiB/s (913kB/s)(8920KiB/10008msec) 00:23:23.378 slat (usec): min=4, max=8035, avg=24.96, stdev=237.36 00:23:23.378 clat (msec): min=8, max=173, avg=71.69, stdev=21.92 00:23:23.378 lat (msec): min=8, max=173, avg=71.71, stdev=21.92 00:23:23.378 clat percentiles (msec): 00:23:23.378 | 1.00th=[ 32], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 51], 00:23:23.378 | 30.00th=[ 56], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 74], 00:23:23.378 | 70.00th=[ 81], 80.00th=[ 87], 90.00th=[ 106], 95.00th=[ 111], 00:23:23.378 | 99.00th=[ 121], 99.50th=[ 159], 99.90th=[ 159], 99.95th=[ 174], 00:23:23.378 | 99.99th=[ 174] 00:23:23.378 bw ( KiB/s): min= 632, max= 992, per=4.28%, avg=874.95, stdev=112.42, samples=19 00:23:23.378 iops : min= 158, max= 248, avg=218.74, stdev=28.10, samples=19 00:23:23.378 lat (msec) : 10=0.27%, 20=0.27%, 50=19.01%, 100=68.61%, 250=11.84% 00:23:23.378 cpu : usr=38.56%, sys=2.00%, ctx=1238, majf=0, minf=9 00:23:23.378 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.6%, 16=15.7%, 32=0.0%, >=64=0.0% 00:23:23.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.378 complete : 0=0.0%, 4=86.8%, 8=13.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.378 issued rwts: total=2230,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.378 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:23.378 filename2: (groupid=0, jobs=1): err= 0: pid=97314: Sat Jul 13 06:10:13 2024 00:23:23.378 read: IOPS=222, BW=889KiB/s (911kB/s)(8912KiB/10020msec) 00:23:23.378 slat (usec): min=4, max=4042, avg=17.29, stdev=85.46 00:23:23.378 clat (msec): min=29, max=131, avg=71.84, stdev=20.48 00:23:23.378 lat (msec): 
min=29, max=131, avg=71.86, stdev=20.48 00:23:23.378 clat percentiles (msec): 00:23:23.378 | 1.00th=[ 36], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 51], 00:23:23.378 | 30.00th=[ 58], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 74], 00:23:23.378 | 70.00th=[ 81], 80.00th=[ 90], 90.00th=[ 106], 95.00th=[ 110], 00:23:23.378 | 99.00th=[ 120], 99.50th=[ 121], 99.90th=[ 131], 99.95th=[ 132], 00:23:23.378 | 99.99th=[ 132] 00:23:23.378 bw ( KiB/s): min= 712, max= 1024, per=4.33%, avg=884.80, stdev=93.96, samples=20 00:23:23.378 iops : min= 178, max= 256, avg=221.20, stdev=23.49, samples=20 00:23:23.378 lat (msec) : 50=19.48%, 100=68.63%, 250=11.89% 00:23:23.378 cpu : usr=39.80%, sys=2.43%, ctx=1341, majf=0, minf=9 00:23:23.378 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.5%, 16=15.8%, 32=0.0%, >=64=0.0% 00:23:23.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.378 complete : 0=0.0%, 4=86.9%, 8=13.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.378 issued rwts: total=2228,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.378 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:23.378 filename2: (groupid=0, jobs=1): err= 0: pid=97315: Sat Jul 13 06:10:13 2024 00:23:23.378 read: IOPS=200, BW=802KiB/s (821kB/s)(8036KiB/10023msec) 00:23:23.378 slat (usec): min=3, max=8020, avg=19.91, stdev=199.21 00:23:23.378 clat (msec): min=29, max=154, avg=79.66, stdev=22.97 00:23:23.378 lat (msec): min=29, max=154, avg=79.68, stdev=22.97 00:23:23.378 clat percentiles (msec): 00:23:23.378 | 1.00th=[ 45], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 61], 00:23:23.378 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 84], 00:23:23.378 | 70.00th=[ 95], 80.00th=[ 101], 90.00th=[ 110], 95.00th=[ 120], 00:23:23.378 | 99.00th=[ 136], 99.50th=[ 150], 99.90th=[ 155], 99.95th=[ 155], 00:23:23.378 | 99.99th=[ 155] 00:23:23.378 bw ( KiB/s): min= 512, max= 976, per=3.90%, avg=796.95, stdev=138.73, samples=20 00:23:23.378 iops : min= 128, max= 244, avg=199.20, stdev=34.69, samples=20 00:23:23.378 lat (msec) : 50=15.03%, 100=64.91%, 250=20.06% 00:23:23.378 cpu : usr=34.75%, sys=2.13%, ctx=1202, majf=0, minf=9 00:23:23.378 IO depths : 1=0.1%, 2=2.3%, 4=9.5%, 8=73.3%, 16=14.9%, 32=0.0%, >=64=0.0% 00:23:23.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.378 complete : 0=0.0%, 4=89.9%, 8=8.1%, 16=2.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.378 issued rwts: total=2009,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.378 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:23.378 filename2: (groupid=0, jobs=1): err= 0: pid=97316: Sat Jul 13 06:10:13 2024 00:23:23.378 read: IOPS=206, BW=824KiB/s (844kB/s)(8260KiB/10019msec) 00:23:23.378 slat (usec): min=4, max=12027, avg=37.85, stdev=482.22 00:23:23.378 clat (msec): min=28, max=143, avg=77.38, stdev=20.65 00:23:23.378 lat (msec): min=28, max=143, avg=77.42, stdev=20.64 00:23:23.378 clat percentiles (msec): 00:23:23.378 | 1.00th=[ 39], 5.00th=[ 48], 10.00th=[ 49], 20.00th=[ 58], 00:23:23.378 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 82], 00:23:23.378 | 70.00th=[ 88], 80.00th=[ 97], 90.00th=[ 108], 95.00th=[ 109], 00:23:23.378 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 123], 99.95th=[ 132], 00:23:23.378 | 99.99th=[ 144] 00:23:23.378 bw ( KiB/s): min= 528, max= 1000, per=4.01%, avg=819.65, stdev=125.19, samples=20 00:23:23.378 iops : min= 132, max= 250, avg=204.90, stdev=31.30, samples=20 00:23:23.378 lat (msec) : 50=14.09%, 100=68.91%, 250=17.00% 00:23:23.378 cpu : usr=32.25%, sys=2.13%, ctx=891, 
majf=0, minf=9 00:23:23.378 IO depths : 1=0.1%, 2=2.1%, 4=8.3%, 8=74.6%, 16=15.0%, 32=0.0%, >=64=0.0% 00:23:23.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.378 complete : 0=0.0%, 4=89.4%, 8=8.8%, 16=1.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.378 issued rwts: total=2065,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.378 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:23.378 filename2: (groupid=0, jobs=1): err= 0: pid=97317: Sat Jul 13 06:10:13 2024 00:23:23.378 read: IOPS=222, BW=889KiB/s (910kB/s)(8888KiB/10002msec) 00:23:23.378 slat (usec): min=4, max=9026, avg=30.10, stdev=296.13 00:23:23.378 clat (msec): min=3, max=159, avg=71.89, stdev=22.35 00:23:23.378 lat (msec): min=3, max=159, avg=71.92, stdev=22.35 00:23:23.378 clat percentiles (msec): 00:23:23.378 | 1.00th=[ 29], 5.00th=[ 44], 10.00th=[ 47], 20.00th=[ 51], 00:23:23.378 | 30.00th=[ 57], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 75], 00:23:23.378 | 70.00th=[ 80], 80.00th=[ 92], 90.00th=[ 108], 95.00th=[ 111], 00:23:23.378 | 99.00th=[ 128], 99.50th=[ 132], 99.90th=[ 144], 99.95th=[ 159], 00:23:23.378 | 99.99th=[ 161] 00:23:23.379 bw ( KiB/s): min= 640, max= 1000, per=4.25%, avg=869.26, stdev=127.12, samples=19 00:23:23.379 iops : min= 160, max= 250, avg=217.32, stdev=31.78, samples=19 00:23:23.379 lat (msec) : 4=0.27%, 10=0.32%, 20=0.27%, 50=18.90%, 100=66.83% 00:23:23.379 lat (msec) : 250=13.41% 00:23:23.379 cpu : usr=40.45%, sys=2.67%, ctx=1533, majf=0, minf=9 00:23:23.379 IO depths : 1=0.1%, 2=0.5%, 4=1.8%, 8=82.2%, 16=15.6%, 32=0.0%, >=64=0.0% 00:23:23.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.379 complete : 0=0.0%, 4=87.2%, 8=12.4%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.379 issued rwts: total=2222,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.379 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:23.379 filename2: (groupid=0, jobs=1): err= 0: pid=97318: Sat Jul 13 06:10:13 2024 00:23:23.379 read: IOPS=207, BW=830KiB/s (850kB/s)(8312KiB/10013msec) 00:23:23.379 slat (usec): min=3, max=8024, avg=28.01, stdev=311.23 00:23:23.379 clat (msec): min=27, max=177, avg=76.90, stdev=23.06 00:23:23.379 lat (msec): min=27, max=177, avg=76.93, stdev=23.07 00:23:23.379 clat percentiles (msec): 00:23:23.379 | 1.00th=[ 40], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 53], 00:23:23.379 | 30.00th=[ 65], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 81], 00:23:23.379 | 70.00th=[ 87], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 112], 00:23:23.379 | 99.00th=[ 132], 99.50th=[ 167], 99.90th=[ 167], 99.95th=[ 178], 00:23:23.379 | 99.99th=[ 178] 00:23:23.379 bw ( KiB/s): min= 523, max= 1072, per=4.04%, avg=826.95, stdev=160.29, samples=20 00:23:23.379 iops : min= 130, max= 268, avg=206.70, stdev=40.15, samples=20 00:23:23.379 lat (msec) : 50=16.65%, 100=65.30%, 250=18.05% 00:23:23.379 cpu : usr=38.07%, sys=2.20%, ctx=1074, majf=0, minf=9 00:23:23.379 IO depths : 1=0.1%, 2=1.9%, 4=7.8%, 8=75.5%, 16=14.8%, 32=0.0%, >=64=0.0% 00:23:23.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.379 complete : 0=0.0%, 4=89.1%, 8=9.2%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.379 issued rwts: total=2078,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.379 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:23.379 00:23:23.379 Run status group 0 (all jobs): 00:23:23.379 READ: bw=20.0MiB/s (20.9MB/s), 795KiB/s-892KiB/s (814kB/s-913kB/s), io=201MiB (210MB), run=10002-10051msec 00:23:23.379 06:10:13 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
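
[editor's note] For readers stepping through the xtrace above: destroy_subsystems 0 1 2 tears down exactly what create_subsystems built, one nvmf_delete_subsystem plus one bdev_null_delete per index, and the next block rebuilds subsystems 0 and 1 for the two-file run. A minimal standalone sketch of that lifecycle follows; it assumes rpc_cmd is the harness's wrapper around SPDK's scripts/rpc.py and that a nvmf_tgt with the TCP transport is already running (both are assumptions), while the command arguments themselves are copied from the log.

    #!/usr/bin/env bash
    # Sketch of the subsystem lifecycle seen in the trace, under the assumptions above.
    rpc=./scripts/rpc.py   # illustrative path to SPDK's RPC client

    # create: 64 MB null bdev, 512 B blocks, 16 B metadata, DIF type 1,
    # exported as an NVMe/TCP subsystem on 10.0.0.2:4420
    $rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420

    # teardown: mirror image of the above, as run by destroy_subsystems
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    $rpc bdev_null_delete bdev_null0

The harness simply loops this pattern over indices 0..2 for the 24-thread run above and over 0..1 for the run that follows.
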
00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:23.379 bdev_null0 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:23.379 [2024-07-13 06:10:13.578283] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.379 06:10:13 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:23.379 bdev_null1 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:23.379 { 00:23:23.379 "params": { 00:23:23.379 "name": "Nvme$subsystem", 00:23:23.379 "trtype": "$TEST_TRANSPORT", 00:23:23.379 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:23.379 "adrfam": "ipv4", 00:23:23.379 "trsvcid": "$NVMF_PORT", 00:23:23.379 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:23.379 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:23.379 "hdgst": ${hdgst:-false}, 00:23:23.379 "ddgst": ${ddgst:-false} 00:23:23.379 }, 00:23:23.379 "method": "bdev_nvme_attach_controller" 00:23:23.379 } 00:23:23.379 EOF 00:23:23.379 )") 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:23.379 06:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:23:23.380 06:10:13 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:23.380 06:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:23.380 06:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:23.380 06:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:23.380 06:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:23:23.380 06:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:23.380 06:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:23.380 06:10:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:23.380 06:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:23.380 06:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:23.380 06:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:23.380 06:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:23:23.380 06:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:23:23.380 06:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:23.380 06:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:23:23.380 06:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:23.380 06:10:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:23.380 06:10:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:23.380 { 00:23:23.380 "params": { 00:23:23.380 "name": "Nvme$subsystem", 00:23:23.380 "trtype": "$TEST_TRANSPORT", 00:23:23.380 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:23.380 "adrfam": "ipv4", 00:23:23.380 "trsvcid": "$NVMF_PORT", 00:23:23.380 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:23.380 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:23.380 "hdgst": ${hdgst:-false}, 00:23:23.380 "ddgst": ${ddgst:-false} 00:23:23.380 }, 00:23:23.380 "method": "bdev_nvme_attach_controller" 00:23:23.380 } 00:23:23.380 EOF 00:23:23.380 )") 00:23:23.380 06:10:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:23.380 06:10:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
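
[editor's note] The nvmf/common.sh trace above is gen_nvmf_target_json at work: one bdev_nvme_attach_controller fragment is emitted per subsystem through a here-document, the fragments are comma-joined under IFS=',', and jq validates and pretty-prints the result before it is handed to fio over /dev/fd/62. A stripped-down sketch of that pattern is below; the addresses, NQNs and digest flags follow the log, while the function name, output path and exact envelope wording are illustrative rather than the harness's verbatim code. The "subsystems"/"bdev" envelope is the shape the spdk_bdev fio plugin expects from --spdk_json_conf.

    # Sketch, assuming jq is installed; not the harness's exact implementation.
    gen_target_json() {
        local sub config=()
        for sub in "$@"; do
            config+=("$(cat <<EOF
    {
      "params": {
        "name": "Nvme$sub",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode$sub",
        "hostnqn": "nqn.2016-06.io.spdk:host$sub",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }
EOF
            )")
        done
        local IFS=,
        # jq validates the assembled document and pretty-prints it, as in the trace
        jq . <<EOF
    { "subsystems": [ { "subsystem": "bdev", "config": [ ${config[*]} ] } ] }
EOF
    }

    gen_target_json 0 1 > /tmp/nvme_fio.json   # illustrative output path

Feeding the result to fio through /dev/fd/62, as the trace does, avoids touching the filesystem; writing it to a file as above is equivalent for a standalone run.
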
00:23:23.380 06:10:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:23:23.380 06:10:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:23.380 "params": { 00:23:23.380 "name": "Nvme0", 00:23:23.380 "trtype": "tcp", 00:23:23.380 "traddr": "10.0.0.2", 00:23:23.380 "adrfam": "ipv4", 00:23:23.380 "trsvcid": "4420", 00:23:23.380 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:23.380 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:23.380 "hdgst": false, 00:23:23.380 "ddgst": false 00:23:23.380 }, 00:23:23.380 "method": "bdev_nvme_attach_controller" 00:23:23.380 },{ 00:23:23.380 "params": { 00:23:23.380 "name": "Nvme1", 00:23:23.380 "trtype": "tcp", 00:23:23.380 "traddr": "10.0.0.2", 00:23:23.380 "adrfam": "ipv4", 00:23:23.380 "trsvcid": "4420", 00:23:23.380 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:23.380 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:23.380 "hdgst": false, 00:23:23.380 "ddgst": false 00:23:23.380 }, 00:23:23.380 "method": "bdev_nvme_attach_controller" 00:23:23.380 }' 00:23:23.380 06:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:23.380 06:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:23.380 06:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:23.380 06:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:23.380 06:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:23.380 06:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:23.380 06:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:23.380 06:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:23.380 06:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:23.380 06:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:23.380 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:23:23.380 ... 00:23:23.380 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:23:23.380 ... 
00:23:23.380 fio-3.35 00:23:23.380 Starting 4 threads 00:23:28.653 00:23:28.653 filename0: (groupid=0, jobs=1): err= 0: pid=97465: Sat Jul 13 06:10:19 2024 00:23:28.653 read: IOPS=2061, BW=16.1MiB/s (16.9MB/s)(80.5MiB/5001msec) 00:23:28.653 slat (nsec): min=7745, max=98950, avg=14095.49, stdev=4658.00 00:23:28.653 clat (usec): min=983, max=7267, avg=3832.77, stdev=639.03 00:23:28.653 lat (usec): min=992, max=7288, avg=3846.86, stdev=639.72 00:23:28.653 clat percentiles (usec): 00:23:28.653 | 1.00th=[ 1450], 5.00th=[ 2278], 10.00th=[ 2966], 20.00th=[ 3884], 00:23:28.653 | 30.00th=[ 3949], 40.00th=[ 3982], 50.00th=[ 3982], 60.00th=[ 4015], 00:23:28.653 | 70.00th=[ 4047], 80.00th=[ 4113], 90.00th=[ 4228], 95.00th=[ 4490], 00:23:28.653 | 99.00th=[ 4883], 99.50th=[ 5014], 99.90th=[ 5145], 99.95th=[ 5211], 00:23:28.653 | 99.99th=[ 6915] 00:23:28.653 bw ( KiB/s): min=15680, max=18688, per=26.15%, avg=16526.22, stdev=1129.03, samples=9 00:23:28.653 iops : min= 1960, max= 2336, avg=2065.78, stdev=141.13, samples=9 00:23:28.653 lat (usec) : 1000=0.03% 00:23:28.653 lat (msec) : 2=3.39%, 4=49.12%, 10=47.47% 00:23:28.653 cpu : usr=90.86%, sys=8.26%, ctx=25, majf=0, minf=0 00:23:28.653 IO depths : 1=0.1%, 2=18.0%, 4=54.4%, 8=27.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:28.653 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:28.653 complete : 0=0.0%, 4=93.0%, 8=7.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:28.653 issued rwts: total=10308,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:28.653 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:28.653 filename0: (groupid=0, jobs=1): err= 0: pid=97466: Sat Jul 13 06:10:19 2024 00:23:28.653 read: IOPS=1895, BW=14.8MiB/s (15.5MB/s)(74.1MiB/5002msec) 00:23:28.653 slat (nsec): min=5302, max=59747, avg=12578.11, stdev=4552.76 00:23:28.653 clat (usec): min=979, max=6676, avg=4170.36, stdev=505.86 00:23:28.653 lat (usec): min=988, max=6695, avg=4182.94, stdev=506.54 00:23:28.653 clat percentiles (usec): 00:23:28.653 | 1.00th=[ 2999], 5.00th=[ 3916], 10.00th=[ 3949], 20.00th=[ 3949], 00:23:28.653 | 30.00th=[ 3982], 40.00th=[ 4015], 50.00th=[ 4047], 60.00th=[ 4080], 00:23:28.653 | 70.00th=[ 4113], 80.00th=[ 4228], 90.00th=[ 4686], 95.00th=[ 5145], 00:23:28.653 | 99.00th=[ 6259], 99.50th=[ 6325], 99.90th=[ 6521], 99.95th=[ 6587], 00:23:28.653 | 99.99th=[ 6652] 00:23:28.653 bw ( KiB/s): min=13328, max=15872, per=23.76%, avg=15018.67, stdev=972.49, samples=9 00:23:28.653 iops : min= 1666, max= 1984, avg=1877.33, stdev=121.56, samples=9 00:23:28.653 lat (usec) : 1000=0.05% 00:23:28.653 lat (msec) : 2=0.52%, 4=36.70%, 10=62.73% 00:23:28.653 cpu : usr=91.28%, sys=7.90%, ctx=13, majf=0, minf=9 00:23:28.653 IO depths : 1=0.1%, 2=24.5%, 4=50.4%, 8=25.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:28.653 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:28.653 complete : 0=0.0%, 4=90.2%, 8=9.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:28.653 issued rwts: total=9479,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:28.653 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:28.653 filename1: (groupid=0, jobs=1): err= 0: pid=97467: Sat Jul 13 06:10:19 2024 00:23:28.653 read: IOPS=1999, BW=15.6MiB/s (16.4MB/s)(78.1MiB/5001msec) 00:23:28.653 slat (nsec): min=7907, max=72295, avg=15938.73, stdev=4373.55 00:23:28.653 clat (usec): min=1080, max=7192, avg=3941.43, stdev=509.97 00:23:28.653 lat (usec): min=1089, max=7207, avg=3957.37, stdev=509.99 00:23:28.654 clat percentiles (usec): 00:23:28.654 | 1.00th=[ 2212], 5.00th=[ 
2638], 10.00th=[ 3490], 20.00th=[ 3916], 00:23:28.654 | 30.00th=[ 3949], 40.00th=[ 3982], 50.00th=[ 4015], 60.00th=[ 4047], 00:23:28.654 | 70.00th=[ 4080], 80.00th=[ 4146], 90.00th=[ 4293], 95.00th=[ 4621], 00:23:28.654 | 99.00th=[ 5014], 99.50th=[ 5211], 99.90th=[ 5604], 99.95th=[ 5669], 00:23:28.654 | 99.99th=[ 5735] 00:23:28.654 bw ( KiB/s): min=14976, max=17952, per=25.48%, avg=16099.56, stdev=818.42, samples=9 00:23:28.654 iops : min= 1872, max= 2244, avg=2012.44, stdev=102.30, samples=9 00:23:28.654 lat (msec) : 2=0.70%, 4=48.93%, 10=50.37% 00:23:28.654 cpu : usr=91.92%, sys=7.26%, ctx=8, majf=0, minf=9 00:23:28.654 IO depths : 1=0.1%, 2=20.5%, 4=52.9%, 8=26.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:28.654 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:28.654 complete : 0=0.0%, 4=91.9%, 8=8.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:28.654 issued rwts: total=10000,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:28.654 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:28.654 filename1: (groupid=0, jobs=1): err= 0: pid=97468: Sat Jul 13 06:10:19 2024 00:23:28.654 read: IOPS=1944, BW=15.2MiB/s (15.9MB/s)(76.0MiB/5001msec) 00:23:28.654 slat (usec): min=7, max=128, avg=16.17, stdev= 4.37 00:23:28.654 clat (usec): min=1237, max=7159, avg=4051.16, stdev=423.06 00:23:28.654 lat (usec): min=1248, max=7172, avg=4067.32, stdev=422.96 00:23:28.654 clat percentiles (usec): 00:23:28.654 | 1.00th=[ 2606], 5.00th=[ 3490], 10.00th=[ 3884], 20.00th=[ 3916], 00:23:28.654 | 30.00th=[ 3949], 40.00th=[ 3982], 50.00th=[ 4015], 60.00th=[ 4047], 00:23:28.654 | 70.00th=[ 4080], 80.00th=[ 4178], 90.00th=[ 4555], 95.00th=[ 4752], 00:23:28.654 | 99.00th=[ 5276], 99.50th=[ 5473], 99.90th=[ 5669], 99.95th=[ 5669], 00:23:28.654 | 99.99th=[ 7177] 00:23:28.654 bw ( KiB/s): min=13824, max=16608, per=24.66%, avg=15585.44, stdev=828.54, samples=9 00:23:28.654 iops : min= 1728, max= 2076, avg=1948.11, stdev=103.65, samples=9 00:23:28.654 lat (msec) : 2=0.48%, 4=45.17%, 10=54.34% 00:23:28.654 cpu : usr=90.82%, sys=8.02%, ctx=27, majf=0, minf=9 00:23:28.654 IO depths : 1=0.1%, 2=22.8%, 4=51.6%, 8=25.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:28.654 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:28.654 complete : 0=0.0%, 4=90.9%, 8=9.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:28.654 issued rwts: total=9725,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:28.654 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:28.654 00:23:28.654 Run status group 0 (all jobs): 00:23:28.654 READ: bw=61.7MiB/s (64.7MB/s), 14.8MiB/s-16.1MiB/s (15.5MB/s-16.9MB/s), io=309MiB (324MB), run=5001-5002msec 00:23:28.654 06:10:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:23:28.654 06:10:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:23:28.654 06:10:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:28.654 06:10:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:28.654 06:10:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:23:28.654 06:10:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:28.654 06:10:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.654 06:10:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:28.654 06:10:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:23:28.654 06:10:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:28.654 06:10:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.654 06:10:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:28.654 06:10:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.654 06:10:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:28.654 06:10:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:23:28.654 06:10:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:23:28.654 06:10:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:28.654 06:10:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.654 06:10:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:28.654 06:10:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.654 06:10:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:23:28.654 06:10:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.654 06:10:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:28.654 ************************************ 00:23:28.654 END TEST fio_dif_rand_params 00:23:28.654 ************************************ 00:23:28.654 06:10:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.654 00:23:28.654 real 0m23.008s 00:23:28.654 user 2m2.437s 00:23:28.654 sys 0m8.923s 00:23:28.654 06:10:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:28.654 06:10:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:28.654 06:10:19 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:23:28.654 06:10:19 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:23:28.654 06:10:19 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:28.654 06:10:19 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:28.654 06:10:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:28.654 ************************************ 00:23:28.654 START TEST fio_dif_digest 00:23:28.654 ************************************ 00:23:28.654 06:10:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:23:28.654 06:10:19 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:23:28.654 06:10:19 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:23:28.654 06:10:19 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:23:28.654 06:10:19 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:23:28.654 06:10:19 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:23:28.654 06:10:19 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:23:28.654 06:10:19 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:23:28.654 06:10:19 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:23:28.654 06:10:19 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:23:28.654 06:10:19 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:23:28.654 06:10:19 
nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:23:28.654 06:10:19 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:23:28.654 06:10:19 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:23:28.654 06:10:19 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:23:28.654 06:10:19 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:23:28.654 06:10:19 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:23:28.654 06:10:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.654 06:10:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:28.654 bdev_null0 00:23:28.654 06:10:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.654 06:10:19 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:28.654 06:10:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.654 06:10:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:28.654 06:10:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.654 06:10:19 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:28.654 06:10:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.654 06:10:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:28.654 06:10:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.654 06:10:19 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:28.654 06:10:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.654 06:10:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:28.654 [2024-07-13 06:10:19.618474] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:28.654 06:10:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.654 06:10:19 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:23:28.654 06:10:19 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:23:28.654 06:10:19 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:28.654 06:10:19 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:28.654 06:10:19 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:23:28.654 06:10:19 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:23:28.654 06:10:19 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:23:28.654 06:10:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:28.654 06:10:19 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:28.654 06:10:19 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:23:28.654 06:10:19 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:28.654 { 00:23:28.654 "params": { 00:23:28.654 "name": 
"Nvme$subsystem", 00:23:28.654 "trtype": "$TEST_TRANSPORT", 00:23:28.654 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.654 "adrfam": "ipv4", 00:23:28.654 "trsvcid": "$NVMF_PORT", 00:23:28.654 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.654 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:28.654 "hdgst": ${hdgst:-false}, 00:23:28.654 "ddgst": ${ddgst:-false} 00:23:28.654 }, 00:23:28.654 "method": "bdev_nvme_attach_controller" 00:23:28.654 } 00:23:28.654 EOF 00:23:28.654 )") 00:23:28.655 06:10:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:28.655 06:10:19 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:23:28.655 06:10:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:28.655 06:10:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:28.655 06:10:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:28.655 06:10:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:23:28.655 06:10:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:28.655 06:10:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:28.655 06:10:19 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:23:28.655 06:10:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:28.655 06:10:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:23:28.655 06:10:19 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:23:28.655 06:10:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:28.655 06:10:19 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:23:28.655 06:10:19 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:23:28.655 06:10:19 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:23:28.655 06:10:19 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:28.655 "params": { 00:23:28.655 "name": "Nvme0", 00:23:28.655 "trtype": "tcp", 00:23:28.655 "traddr": "10.0.0.2", 00:23:28.655 "adrfam": "ipv4", 00:23:28.655 "trsvcid": "4420", 00:23:28.655 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:28.655 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:28.655 "hdgst": true, 00:23:28.655 "ddgst": true 00:23:28.655 }, 00:23:28.655 "method": "bdev_nvme_attach_controller" 00:23:28.655 }' 00:23:28.655 06:10:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:28.655 06:10:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:28.655 06:10:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:28.655 06:10:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:28.655 06:10:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:28.655 06:10:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:28.655 06:10:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:28.655 06:10:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:28.655 06:10:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:28.655 06:10:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:28.655 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:23:28.655 ... 
00:23:28.655 fio-3.35 00:23:28.655 Starting 3 threads 00:23:38.647 00:23:38.647 filename0: (groupid=0, jobs=1): err= 0: pid=97574: Sat Jul 13 06:10:30 2024 00:23:38.647 read: IOPS=207, BW=25.9MiB/s (27.2MB/s)(259MiB/10001msec) 00:23:38.647 slat (nsec): min=7417, max=69452, avg=17644.75, stdev=6023.25 00:23:38.647 clat (usec): min=13544, max=16971, avg=14433.85, stdev=541.77 00:23:38.647 lat (usec): min=13559, max=17004, avg=14451.49, stdev=542.24 00:23:38.647 clat percentiles (usec): 00:23:38.647 | 1.00th=[13566], 5.00th=[13698], 10.00th=[13698], 20.00th=[13829], 00:23:38.647 | 30.00th=[13960], 40.00th=[14091], 50.00th=[14353], 60.00th=[14615], 00:23:38.647 | 70.00th=[14877], 80.00th=[15008], 90.00th=[15139], 95.00th=[15270], 00:23:38.647 | 99.00th=[15533], 99.50th=[15533], 99.90th=[16909], 99.95th=[16909], 00:23:38.647 | 99.99th=[16909] 00:23:38.647 bw ( KiB/s): min=25394, max=27648, per=33.32%, avg=26518.84, stdev=529.06, samples=19 00:23:38.647 iops : min= 198, max= 216, avg=207.16, stdev= 4.18, samples=19 00:23:38.647 lat (msec) : 20=100.00% 00:23:38.647 cpu : usr=91.97%, sys=7.41%, ctx=17, majf=0, minf=0 00:23:38.647 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:38.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:38.647 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:38.647 issued rwts: total=2073,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:38.647 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:38.647 filename0: (groupid=0, jobs=1): err= 0: pid=97575: Sat Jul 13 06:10:30 2024 00:23:38.647 read: IOPS=207, BW=25.9MiB/s (27.2MB/s)(259MiB/10003msec) 00:23:38.647 slat (nsec): min=7896, max=55826, avg=16885.05, stdev=6427.73 00:23:38.647 clat (usec): min=13547, max=18436, avg=14438.03, stdev=554.52 00:23:38.647 lat (usec): min=13561, max=18462, avg=14454.91, stdev=554.95 00:23:38.647 clat percentiles (usec): 00:23:38.647 | 1.00th=[13566], 5.00th=[13698], 10.00th=[13698], 20.00th=[13829], 00:23:38.647 | 30.00th=[13960], 40.00th=[14091], 50.00th=[14484], 60.00th=[14615], 00:23:38.647 | 70.00th=[14877], 80.00th=[15008], 90.00th=[15139], 95.00th=[15270], 00:23:38.647 | 99.00th=[15533], 99.50th=[15664], 99.90th=[18482], 99.95th=[18482], 00:23:38.647 | 99.99th=[18482] 00:23:38.647 bw ( KiB/s): min=25344, max=27648, per=33.32%, avg=26516.21, stdev=535.06, samples=19 00:23:38.647 iops : min= 198, max= 216, avg=207.16, stdev= 4.18, samples=19 00:23:38.647 lat (msec) : 20=100.00% 00:23:38.647 cpu : usr=91.38%, sys=8.00%, ctx=13, majf=0, minf=0 00:23:38.647 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:38.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:38.647 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:38.647 issued rwts: total=2073,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:38.647 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:38.647 filename0: (groupid=0, jobs=1): err= 0: pid=97576: Sat Jul 13 06:10:30 2024 00:23:38.647 read: IOPS=207, BW=25.9MiB/s (27.2MB/s)(259MiB/10001msec) 00:23:38.647 slat (nsec): min=8282, max=72445, avg=17861.58, stdev=6120.02 00:23:38.647 clat (usec): min=13508, max=16542, avg=14431.77, stdev=538.81 00:23:38.647 lat (usec): min=13517, max=16567, avg=14449.63, stdev=539.39 00:23:38.647 clat percentiles (usec): 00:23:38.647 | 1.00th=[13566], 5.00th=[13698], 10.00th=[13698], 20.00th=[13829], 00:23:38.647 | 30.00th=[13960], 40.00th=[14091], 
50.00th=[14353], 60.00th=[14615], 00:23:38.647 | 70.00th=[14877], 80.00th=[15008], 90.00th=[15139], 95.00th=[15270], 00:23:38.647 | 99.00th=[15533], 99.50th=[15533], 99.90th=[16581], 99.95th=[16581], 00:23:38.647 | 99.99th=[16581] 00:23:38.647 bw ( KiB/s): min=25344, max=27648, per=33.32%, avg=26516.21, stdev=535.06, samples=19 00:23:38.647 iops : min= 198, max= 216, avg=207.16, stdev= 4.18, samples=19 00:23:38.647 lat (msec) : 20=100.00% 00:23:38.647 cpu : usr=90.44%, sys=8.63%, ctx=101, majf=0, minf=0 00:23:38.647 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:38.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:38.647 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:38.647 issued rwts: total=2073,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:38.647 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:38.647 00:23:38.647 Run status group 0 (all jobs): 00:23:38.647 READ: bw=77.7MiB/s (81.5MB/s), 25.9MiB/s-25.9MiB/s (27.2MB/s-27.2MB/s), io=777MiB (815MB), run=10001-10003msec 00:23:38.906 06:10:30 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:23:38.906 06:10:30 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:23:38.906 06:10:30 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:23:38.906 06:10:30 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:38.906 06:10:30 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:23:38.906 06:10:30 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:38.906 06:10:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.906 06:10:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:38.906 06:10:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.906 06:10:30 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:38.906 06:10:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.906 06:10:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:38.906 ************************************ 00:23:38.906 END TEST fio_dif_digest 00:23:38.906 ************************************ 00:23:38.906 06:10:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.906 00:23:38.906 real 0m10.884s 00:23:38.906 user 0m27.945s 00:23:38.906 sys 0m2.636s 00:23:38.906 06:10:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:38.906 06:10:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:38.906 06:10:30 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:23:38.906 06:10:30 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:23:38.906 06:10:30 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:23:38.906 06:10:30 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:38.906 06:10:30 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:23:38.906 06:10:30 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:38.906 06:10:30 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:23:38.906 06:10:30 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:38.906 06:10:30 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:38.906 rmmod nvme_tcp 00:23:38.906 rmmod nvme_fabrics 00:23:38.906 rmmod nvme_keyring 00:23:38.906 06:10:30 nvmf_dif -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:39.164 06:10:30 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:23:39.164 06:10:30 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:23:39.164 06:10:30 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 96834 ']' 00:23:39.164 06:10:30 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 96834 00:23:39.164 06:10:30 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 96834 ']' 00:23:39.164 06:10:30 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 96834 00:23:39.164 06:10:30 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:23:39.164 06:10:30 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:39.164 06:10:30 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96834 00:23:39.164 06:10:30 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:39.164 06:10:30 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:39.164 06:10:30 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96834' 00:23:39.164 killing process with pid 96834 00:23:39.164 06:10:30 nvmf_dif -- common/autotest_common.sh@967 -- # kill 96834 00:23:39.164 06:10:30 nvmf_dif -- common/autotest_common.sh@972 -- # wait 96834 00:23:39.164 06:10:30 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:23:39.164 06:10:30 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:39.730 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:39.730 Waiting for block devices as requested 00:23:39.730 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:39.730 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:39.730 06:10:31 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:39.730 06:10:31 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:39.730 06:10:31 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:39.730 06:10:31 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:39.730 06:10:31 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:39.730 06:10:31 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:39.730 06:10:31 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.989 06:10:31 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:39.989 ************************************ 00:23:39.989 END TEST nvmf_dif 00:23:39.989 ************************************ 00:23:39.989 00:23:39.989 real 0m58.195s 00:23:39.989 user 3m44.507s 00:23:39.989 sys 0m19.933s 00:23:39.989 06:10:31 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:39.989 06:10:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:39.989 06:10:31 -- common/autotest_common.sh@1142 -- # return 0 00:23:39.989 06:10:31 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:23:39.989 06:10:31 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:39.989 06:10:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:39.989 06:10:31 -- common/autotest_common.sh@10 -- # set +x 00:23:39.989 ************************************ 00:23:39.989 START TEST nvmf_abort_qd_sizes 00:23:39.989 ************************************ 00:23:39.989 06:10:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:23:39.989 * Looking for test storage... 00:23:39.989 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:39.989 06:10:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:39.989 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:23:39.989 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:39.989 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:39.989 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:39.989 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:39.989 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:39.989 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:39.989 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:39.989 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:39.989 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:39.989 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:39.989 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:23:39.989 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=d95af516-4532-4483-a837-b3cd72acabce 00:23:39.989 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:39.989 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:39.989 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:39.989 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:39.989 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:39.989 06:10:31 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:39.989 06:10:31 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:39.989 06:10:31 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:39.989 06:10:31 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.990 06:10:31 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.990 06:10:31 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.990 06:10:31 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:23:39.990 06:10:31 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.990 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:23:39.990 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:39.990 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:39.990 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:39.990 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:39.990 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:39.990 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:39.990 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:39.990 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:39.990 06:10:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:23:39.990 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:39.990 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:39.990 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:39.990 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:39.990 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:39.990 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:39.990 06:10:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:39.990 06:10:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.990 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:39.990 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:39.990 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:39.990 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:39.990 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:39.990 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:39.990 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:39.990 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:39.990 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:39.990 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:39.990 06:10:31 
nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:39.990 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:39.990 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:39.990 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:39.990 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:39.990 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:39.990 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:39.990 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:39.990 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:39.990 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:39.990 Cannot find device "nvmf_tgt_br" 00:23:39.990 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 00:23:39.990 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:40.248 Cannot find device "nvmf_tgt_br2" 00:23:40.248 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 00:23:40.248 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:40.248 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:40.248 Cannot find device "nvmf_tgt_br" 00:23:40.248 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:23:40.248 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:40.248 Cannot find device "nvmf_tgt_br2" 00:23:40.248 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:23:40.248 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:40.248 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:40.248 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:40.248 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:40.248 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:23:40.248 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:40.248 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:40.248 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:23:40.248 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:40.248 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:40.248 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:40.248 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:40.248 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:40.248 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:40.248 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:40.248 06:10:31 
nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:40.248 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:40.248 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:40.248 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:40.248 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:40.248 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:40.248 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:40.248 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:40.248 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:40.248 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:40.248 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:40.248 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:40.248 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:40.507 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:40.507 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:40.507 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:40.507 06:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:40.507 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:40.507 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:23:40.507 00:23:40.507 --- 10.0.0.2 ping statistics --- 00:23:40.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:40.507 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:23:40.507 06:10:32 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:40.507 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:40.507 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:23:40.507 00:23:40.507 --- 10.0.0.3 ping statistics --- 00:23:40.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:40.507 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:23:40.507 06:10:32 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:40.507 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:40.507 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:23:40.507 00:23:40.507 --- 10.0.0.1 ping statistics --- 00:23:40.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:40.507 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:23:40.507 06:10:32 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:40.507 06:10:32 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 00:23:40.507 06:10:32 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:23:40.507 06:10:32 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:41.075 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:41.075 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:23:41.333 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:23:41.333 06:10:32 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:41.333 06:10:32 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:41.333 06:10:32 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:41.333 06:10:32 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:41.333 06:10:32 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:41.333 06:10:32 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:41.333 06:10:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:23:41.333 06:10:32 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:41.333 06:10:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:41.333 06:10:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:41.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:41.333 06:10:32 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=98163 00:23:41.333 06:10:32 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:23:41.333 06:10:32 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 98163 00:23:41.333 06:10:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 98163 ']' 00:23:41.333 06:10:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:41.333 06:10:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:41.333 06:10:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:41.333 06:10:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:41.333 06:10:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:41.333 [2024-07-13 06:10:32.986170] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
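The nvmf_veth_init sequence traced above reduces to the following condensed sketch (interface names and addresses are the ones in this log; the 10.0.0.3 second target interface and the address flush are omitted for brevity). The initiator keeps 10.0.0.1, the target runs inside the nvmf_tgt_ns_spdk namespace on 10.0.0.2, both veth peers hang off the nvmf_br bridge, and TCP port 4420 is opened in iptables, which is what the three ping checks just above verify:

# Condensed sketch of the topology built above (not a verbatim replay of the xtrace).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # host -> target namespace
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1  # target namespace -> host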
00:23:41.333 [2024-07-13 06:10:32.986292] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:41.591 [2024-07-13 06:10:33.128879] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:41.591 [2024-07-13 06:10:33.178343] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:41.591 [2024-07-13 06:10:33.178674] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:41.591 [2024-07-13 06:10:33.178873] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:41.591 [2024-07-13 06:10:33.179175] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:41.591 [2024-07-13 06:10:33.179225] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:41.591 [2024-07-13 06:10:33.179544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:41.591 [2024-07-13 06:10:33.182431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:41.591 [2024-07-13 06:10:33.182547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:41.591 [2024-07-13 06:10:33.182555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:41.591 [2024-07-13 06:10:33.215888] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:23:41.591 06:10:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:41.591 06:10:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:23:41.591 06:10:33 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:41.591 06:10:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:41.591 06:10:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:41.591 06:10:33 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:41.591 06:10:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:23:41.591 06:10:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:23:41.591 06:10:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:23:41.592 06:10:33 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:23:41.592 06:10:33 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:23:41.592 06:10:33 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:23:41.592 06:10:33 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:23:41.592 06:10:33 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:23:41.592 06:10:33 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:23:41.592 06:10:33 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:23:41.592 06:10:33 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:23:41.592 06:10:33 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:23:41.592 06:10:33 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:23:41.592 06:10:33 
nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 00:23:41.592 06:10:33 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # class=01 00:23:41.592 06:10:33 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:23:41.592 06:10:33 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:23:41.592 06:10:33 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:23:41.592 06:10:33 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:23:41.592 06:10:33 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:23:41.592 06:10:33 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:23:41.592 06:10:33 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:23:41.592 06:10:33 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:23:41.592 06:10:33 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:23:41.592 06:10:33 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:23:41.850 06:10:33 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:23:41.850 06:10:33 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:23:41.850 06:10:33 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:23:41.850 06:10:33 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:23:41.850 06:10:33 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:23:41.850 06:10:33 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:23:41.851 06:10:33 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:23:41.851 06:10:33 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:23:41.851 06:10:33 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:23:41.851 06:10:33 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:23:41.851 06:10:33 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:23:41.851 06:10:33 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:23:41.851 06:10:33 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:23:41.851 06:10:33 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:23:41.851 06:10:33 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:23:41.851 06:10:33 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:23:41.851 06:10:33 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:23:41.851 06:10:33 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:23:41.851 06:10:33 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:23:41.851 06:10:33 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:23:41.851 06:10:33 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:23:41.851 06:10:33 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:23:41.851 06:10:33 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:23:41.851 06:10:33 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:23:41.851 06:10:33 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:23:41.851 06:10:33 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:23:41.851 06:10:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
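The nvme_in_userspace walk above boils down to: list PCI functions whose class/subclass/prog-if is 01/08/02 (an NVMe controller), keep those still bound to the kernel nvme driver, and hand the list to the test, which picks the first entry (0000:00:10.0 here) as the spdk_target_abort device. A sketch with the pipeline copied from the xtrace and the surrounding loop paraphrased:

# Enumerate NVMe PCI functions in userspace (pipeline as in the xtrace above;
# loop wrapper paraphrased). On this VM it prints 0000:00:10.0 and 0000:00:11.0.
nvmes=()
for bdf in $(lspci -mm -n -D | grep -i -- -p02 \
        | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'); do
    [[ -e /sys/bus/pci/drivers/nvme/$bdf ]] && nvmes+=("$bdf")
done
printf '%s\n' "${nvmes[@]}"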
00:23:41.851 06:10:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:23:41.851 06:10:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:23:41.851 06:10:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:41.851 06:10:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:41.851 06:10:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:41.851 ************************************ 00:23:41.851 START TEST spdk_target_abort 00:23:41.851 ************************************ 00:23:41.851 06:10:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:23:41.851 06:10:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:23:41.851 06:10:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:23:41.851 06:10:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.851 06:10:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:41.851 spdk_targetn1 00:23:41.851 06:10:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.851 06:10:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:41.851 06:10:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.851 06:10:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:41.851 [2024-07-13 06:10:33.436799] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:41.851 06:10:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.851 06:10:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:23:41.851 06:10:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.851 06:10:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:41.851 06:10:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.851 06:10:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:23:41.851 06:10:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.851 06:10:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:41.851 06:10:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.851 06:10:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:23:41.851 06:10:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.851 06:10:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:41.851 [2024-07-13 06:10:33.472946] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:41.851 06:10:33 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.851 06:10:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:23:41.851 06:10:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:23:41.851 06:10:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:23:41.851 06:10:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:23:41.851 06:10:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:23:41.851 06:10:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:23:41.851 06:10:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:23:41.851 06:10:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:23:41.851 06:10:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:23:41.851 06:10:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:41.851 06:10:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:23:41.851 06:10:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:41.851 06:10:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:23:41.851 06:10:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:41.851 06:10:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:23:41.851 06:10:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:41.851 06:10:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:41.851 06:10:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:41.851 06:10:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:41.851 06:10:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:41.851 06:10:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:45.136 Initializing NVMe Controllers 00:23:45.136 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:23:45.136 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:45.136 Initialization complete. Launching workers. 
00:23:45.136 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10487, failed: 0 00:23:45.136 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1088, failed to submit 9399 00:23:45.136 success 787, unsuccess 301, failed 0 00:23:45.136 06:10:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:45.136 06:10:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:48.423 Initializing NVMe Controllers 00:23:48.423 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:23:48.423 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:48.423 Initialization complete. Launching workers. 00:23:48.423 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9015, failed: 0 00:23:48.423 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1181, failed to submit 7834 00:23:48.423 success 387, unsuccess 794, failed 0 00:23:48.423 06:10:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:48.423 06:10:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:51.735 Initializing NVMe Controllers 00:23:51.735 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:23:51.735 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:51.735 Initialization complete. Launching workers. 
00:23:51.735 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30735, failed: 0 00:23:51.735 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2343, failed to submit 28392 00:23:51.735 success 476, unsuccess 1867, failed 0 00:23:51.735 06:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:23:51.735 06:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.735 06:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:51.735 06:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.735 06:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:23:51.735 06:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.735 06:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:51.994 06:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.994 06:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 98163 00:23:51.994 06:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 98163 ']' 00:23:51.994 06:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 98163 00:23:51.994 06:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:23:51.994 06:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:51.994 06:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 98163 00:23:51.994 killing process with pid 98163 00:23:51.994 06:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:51.994 06:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:51.994 06:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 98163' 00:23:51.994 06:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 98163 00:23:51.994 06:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 98163 00:23:52.252 ************************************ 00:23:52.252 END TEST spdk_target_abort 00:23:52.252 ************************************ 00:23:52.252 00:23:52.252 real 0m10.419s 00:23:52.252 user 0m39.573s 00:23:52.252 sys 0m2.182s 00:23:52.252 06:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:52.252 06:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:52.252 06:10:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:23:52.252 06:10:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:23:52.252 06:10:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:52.252 06:10:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:52.252 06:10:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:52.252 
************************************ 00:23:52.252 START TEST kernel_target_abort 00:23:52.252 ************************************ 00:23:52.252 06:10:43 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:23:52.252 06:10:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:23:52.252 06:10:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:23:52.252 06:10:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:52.252 06:10:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:52.252 06:10:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:52.252 06:10:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:52.252 06:10:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:52.252 06:10:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:52.252 06:10:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:52.252 06:10:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:52.252 06:10:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:52.252 06:10:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:23:52.252 06:10:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:23:52.252 06:10:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:23:52.252 06:10:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:52.252 06:10:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:52.252 06:10:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:52.252 06:10:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:23:52.252 06:10:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:23:52.252 06:10:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:23:52.252 06:10:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:52.252 06:10:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:52.511 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:52.511 Waiting for block devices as requested 00:23:52.769 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:52.769 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:52.769 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:52.769 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:52.769 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:23:52.769 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:23:52.769 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:52.769 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:52.769 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:23:52.769 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:52.769 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:23:52.769 No valid GPT data, bailing 00:23:53.029 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:53.029 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:23:53.029 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:23:53.029 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:23:53.029 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:53.029 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:23:53.029 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:23:53.029 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:23:53.029 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:23:53.029 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:53.029 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:23:53.029 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:23:53.029 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:23:53.029 No valid GPT data, bailing 00:23:53.029 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:23:53.029 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:23:53.029 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:23:53.029 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:23:53.029 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:53.029 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:23:53.029 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:23:53.029 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:23:53.029 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:23:53.029 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:53.029 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:23:53.029 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:23:53.029 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:23:53.029 No valid GPT data, bailing 00:23:53.029 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:23:53.029 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:23:53.029 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:23:53.029 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:23:53.029 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:53.029 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:23:53.029 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:23:53.029 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:23:53.029 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:23:53.029 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:53.029 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:23:53.029 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:23:53.029 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:23:53.029 No valid GPT data, bailing 00:23:53.029 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:23:53.029 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:23:53.029 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:23:53.029 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:23:53.029 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 00:23:53.029 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:53.029 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:53.029 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:53.029 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:23:53.029 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:23:53.029 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:23:53.029 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:23:53.029 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:23:53.029 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:23:53.029 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:23:53.029 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:23:53.029 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:53.289 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce --hostid=d95af516-4532-4483-a837-b3cd72acabce -a 10.0.0.1 -t tcp -s 4420 00:23:53.289 00:23:53.289 Discovery Log Number of Records 2, Generation counter 2 00:23:53.289 =====Discovery Log Entry 0====== 00:23:53.289 trtype: tcp 00:23:53.289 adrfam: ipv4 00:23:53.289 subtype: current discovery subsystem 00:23:53.289 treq: not specified, sq flow control disable supported 00:23:53.289 portid: 1 00:23:53.289 trsvcid: 4420 00:23:53.289 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:53.289 traddr: 10.0.0.1 00:23:53.289 eflags: none 00:23:53.289 sectype: none 00:23:53.289 =====Discovery Log Entry 1====== 00:23:53.289 trtype: tcp 00:23:53.289 adrfam: ipv4 00:23:53.289 subtype: nvme subsystem 00:23:53.289 treq: not specified, sq flow control disable supported 00:23:53.289 portid: 1 00:23:53.289 trsvcid: 4420 00:23:53.289 subnqn: nqn.2016-06.io.spdk:testnqn 00:23:53.289 traddr: 10.0.0.1 00:23:53.289 eflags: none 00:23:53.289 sectype: none 00:23:53.289 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:23:53.289 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:23:53.289 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:23:53.289 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:23:53.289 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:23:53.289 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:23:53.289 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:23:53.289 06:10:44 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:23:53.289 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:23:53.289 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:53.289 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:23:53.289 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:53.289 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:23:53.289 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:53.289 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:23:53.289 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:53.289 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:23:53.289 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:53.289 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:53.289 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:53.289 06:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:56.574 Initializing NVMe Controllers 00:23:56.574 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:56.574 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:56.574 Initialization complete. Launching workers. 00:23:56.574 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30550, failed: 0 00:23:56.574 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 30550, failed to submit 0 00:23:56.574 success 0, unsuccess 30550, failed 0 00:23:56.574 06:10:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:56.574 06:10:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:59.871 Initializing NVMe Controllers 00:23:59.871 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:59.871 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:59.871 Initialization complete. Launching workers. 
00:23:59.871 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 64621, failed: 0 00:23:59.871 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 27331, failed to submit 37290 00:23:59.871 success 0, unsuccess 27331, failed 0 00:23:59.871 06:10:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:59.871 06:10:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:03.157 Initializing NVMe Controllers 00:24:03.157 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:03.157 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:03.157 Initialization complete. Launching workers. 00:24:03.157 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 68873, failed: 0 00:24:03.157 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 17186, failed to submit 51687 00:24:03.157 success 0, unsuccess 17186, failed 0 00:24:03.157 06:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:24:03.157 06:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:03.157 06:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:24:03.157 06:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:03.157 06:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:03.157 06:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:03.157 06:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:03.157 06:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:24:03.157 06:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:24:03.157 06:10:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:03.416 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:04.351 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:24:04.351 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:24:04.351 00:24:04.351 real 0m12.039s 00:24:04.351 user 0m6.047s 00:24:04.351 sys 0m3.425s 00:24:04.351 06:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:04.351 06:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:04.351 ************************************ 00:24:04.351 END TEST kernel_target_abort 00:24:04.351 ************************************ 00:24:04.351 06:10:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:24:04.351 06:10:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:04.351 
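The kernel_target_abort run above builds an in-kernel NVMe/TCP target through configfs, sweeps the SPDK abort example over queue depths 4, 24 and 64, and then tears the target down. A standalone sketch of the same flow, using the NQN, address, port and backing device from this trace, might look like the following; note the configfs attribute file names are the standard nvmet ones and are an assumption here, since the xtrace only shows the echo commands, not their redirection targets.

  # sketch: recreate the kernel NVMe/TCP target driven by nvmf/common.sh above
  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1
  modprobe nvmet nvmet_tcp
  mkdir "$subsys"
  mkdir "$subsys/namespaces/1"
  mkdir "$port"
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # model string (assumed target file)
  echo 1            > "$subsys/attr_allow_any_host"
  echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"         # the free namespace found by the scan above
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$port/addr_traddr"
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"

  # sweep the abort example over the queue depths used by the test
  for qd in 4 24 64; do
      /home/vagrant/spdk_repo/spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
          -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  done

  # teardown, mirroring clean_kernel_target
  echo 0 > "$subsys/namespaces/1/enable"
  rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"
  rmdir "$subsys/namespaces/1"
  rmdir "$port"
  rmdir "$subsys"
  modprobe -r nvmet_tcp nvmet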
06:10:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:24:04.351 06:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:04.351 06:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:24:04.351 06:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:04.351 06:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:24:04.351 06:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:04.351 06:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:04.351 rmmod nvme_tcp 00:24:04.351 rmmod nvme_fabrics 00:24:04.351 rmmod nvme_keyring 00:24:04.351 06:10:56 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:04.351 06:10:56 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:24:04.351 06:10:56 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:24:04.351 06:10:56 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 98163 ']' 00:24:04.351 06:10:56 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 98163 00:24:04.351 06:10:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 98163 ']' 00:24:04.351 06:10:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 98163 00:24:04.351 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (98163) - No such process 00:24:04.351 Process with pid 98163 is not found 00:24:04.351 06:10:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 98163 is not found' 00:24:04.351 06:10:56 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:24:04.351 06:10:56 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:04.917 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:04.917 Waiting for block devices as requested 00:24:04.917 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:04.917 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:04.917 06:10:56 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:04.917 06:10:56 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:04.917 06:10:56 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:04.917 06:10:56 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:04.918 06:10:56 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:04.918 06:10:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:04.918 06:10:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:05.176 06:10:56 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:05.176 00:24:05.176 real 0m25.107s 00:24:05.176 user 0m46.583s 00:24:05.176 sys 0m6.921s 00:24:05.176 06:10:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:05.176 06:10:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:05.176 ************************************ 00:24:05.176 END TEST nvmf_abort_qd_sizes 00:24:05.176 ************************************ 00:24:05.176 06:10:56 -- common/autotest_common.sh@1142 -- # return 0 00:24:05.176 06:10:56 -- spdk/autotest.sh@295 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:24:05.176 06:10:56 -- common/autotest_common.sh@1099 -- # '[' 2 
-le 1 ']' 00:24:05.176 06:10:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:05.176 06:10:56 -- common/autotest_common.sh@10 -- # set +x 00:24:05.176 ************************************ 00:24:05.176 START TEST keyring_file 00:24:05.176 ************************************ 00:24:05.176 06:10:56 keyring_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:24:05.176 * Looking for test storage... 00:24:05.176 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:24:05.176 06:10:56 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:24:05.176 06:10:56 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:05.176 06:10:56 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:24:05.176 06:10:56 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:05.176 06:10:56 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:05.176 06:10:56 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:05.176 06:10:56 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:05.176 06:10:56 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:05.176 06:10:56 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:05.176 06:10:56 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:05.176 06:10:56 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:05.176 06:10:56 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:05.176 06:10:56 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:05.176 06:10:56 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:24:05.176 06:10:56 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=d95af516-4532-4483-a837-b3cd72acabce 00:24:05.176 06:10:56 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:05.176 06:10:56 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:05.176 06:10:56 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:05.176 06:10:56 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:05.176 06:10:56 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:05.176 06:10:56 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:05.176 06:10:56 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:05.176 06:10:56 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:05.176 06:10:56 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.176 06:10:56 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.176 06:10:56 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.176 06:10:56 keyring_file -- paths/export.sh@5 -- # export PATH 00:24:05.177 06:10:56 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.177 06:10:56 keyring_file -- nvmf/common.sh@47 -- # : 0 00:24:05.177 06:10:56 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:05.177 06:10:56 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:05.177 06:10:56 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:05.177 06:10:56 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:05.177 06:10:56 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:05.177 06:10:56 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:05.177 06:10:56 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:05.177 06:10:56 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:05.177 06:10:56 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:24:05.177 06:10:56 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:24:05.177 06:10:56 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:24:05.177 06:10:56 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:24:05.177 06:10:56 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:24:05.177 06:10:56 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:24:05.177 06:10:56 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:24:05.177 06:10:56 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:24:05.177 06:10:56 keyring_file -- keyring/common.sh@17 -- # name=key0 00:24:05.177 06:10:56 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:24:05.177 06:10:56 keyring_file -- keyring/common.sh@17 -- # digest=0 00:24:05.177 06:10:56 keyring_file -- keyring/common.sh@18 -- # mktemp 00:24:05.177 06:10:56 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.dggNnfAkQY 00:24:05.177 06:10:56 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:24:05.177 06:10:56 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 
00112233445566778899aabbccddeeff 0 00:24:05.177 06:10:56 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:24:05.177 06:10:56 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:05.177 06:10:56 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:24:05.177 06:10:56 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:24:05.177 06:10:56 keyring_file -- nvmf/common.sh@705 -- # python - 00:24:05.177 06:10:56 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.dggNnfAkQY 00:24:05.177 06:10:56 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.dggNnfAkQY 00:24:05.177 06:10:56 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.dggNnfAkQY 00:24:05.177 06:10:56 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:24:05.177 06:10:56 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:24:05.177 06:10:56 keyring_file -- keyring/common.sh@17 -- # name=key1 00:24:05.177 06:10:56 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:24:05.177 06:10:56 keyring_file -- keyring/common.sh@17 -- # digest=0 00:24:05.177 06:10:56 keyring_file -- keyring/common.sh@18 -- # mktemp 00:24:05.177 06:10:56 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.7jdmzrFCxU 00:24:05.177 06:10:56 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:24:05.177 06:10:56 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:24:05.177 06:10:56 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:24:05.177 06:10:56 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:05.177 06:10:56 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:24:05.177 06:10:56 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:24:05.177 06:10:56 keyring_file -- nvmf/common.sh@705 -- # python - 00:24:05.436 06:10:56 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.7jdmzrFCxU 00:24:05.436 06:10:56 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.7jdmzrFCxU 00:24:05.436 06:10:56 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.7jdmzrFCxU 00:24:05.436 06:10:56 keyring_file -- keyring/file.sh@30 -- # tgtpid=99011 00:24:05.436 06:10:56 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:05.436 06:10:56 keyring_file -- keyring/file.sh@32 -- # waitforlisten 99011 00:24:05.436 06:10:56 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 99011 ']' 00:24:05.436 06:10:56 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:05.436 06:10:56 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:05.436 06:10:56 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:05.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:05.436 06:10:56 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:05.436 06:10:56 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:05.436 [2024-07-13 06:10:57.009746] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
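The two key files just created (/tmp/tmp.dggNnfAkQY and /tmp/tmp.7jdmzrFCxU) come from prep_key in test/keyring/common.sh: a hex secret is converted into an NVMeTLSkey-1 interchange string by format_interchange_psk (the inline python seen above) and written to a mktemp path with mode 0600. A rough standalone sketch with values from this run; sourcing nvmf/common.sh directly and the output redirection are assumptions, since the trace hides the redirect.

  # sketch of: prep_key key0 00112233445566778899aabbccddeeff 0
  source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh   # provides format_interchange_psk/format_key
  key0=00112233445566778899aabbccddeeff                     # hex PSK used by the test
  key0path=$(mktemp)                                        # e.g. /tmp/tmp.dggNnfAkQY in this run
  format_interchange_psk "$key0" 0 > "$key0path"            # digest 0, NVMeTLSkey-1 interchange format
  chmod 0600 "$key0path"                                    # looser modes are rejected later in this test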
00:24:05.436 [2024-07-13 06:10:57.009865] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99011 ] 00:24:05.436 [2024-07-13 06:10:57.150052] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:05.694 [2024-07-13 06:10:57.195152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:05.694 [2024-07-13 06:10:57.230787] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:24:05.694 06:10:57 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:05.694 06:10:57 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:24:05.694 06:10:57 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:24:05.694 06:10:57 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.694 06:10:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:05.694 [2024-07-13 06:10:57.368478] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:05.694 null0 00:24:05.694 [2024-07-13 06:10:57.400426] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:05.694 [2024-07-13 06:10:57.400701] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:24:05.694 [2024-07-13 06:10:57.408409] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:05.694 06:10:57 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.694 06:10:57 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:05.694 06:10:57 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:24:05.694 06:10:57 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:05.694 06:10:57 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:05.694 06:10:57 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:05.694 06:10:57 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:05.694 06:10:57 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:05.694 06:10:57 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:05.694 06:10:57 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.694 06:10:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:05.694 [2024-07-13 06:10:57.420408] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:24:05.972 request: 00:24:05.972 { 00:24:05.972 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:24:05.972 "secure_channel": false, 00:24:05.972 "listen_address": { 00:24:05.972 "trtype": "tcp", 00:24:05.972 "traddr": "127.0.0.1", 00:24:05.972 "trsvcid": "4420" 00:24:05.972 }, 00:24:05.972 "method": "nvmf_subsystem_add_listener", 00:24:05.972 "req_id": 1 00:24:05.972 } 00:24:05.972 Got JSON-RPC error response 00:24:05.972 response: 00:24:05.972 { 00:24:05.972 "code": -32602, 00:24:05.972 "message": "Invalid parameters" 00:24:05.972 } 00:24:05.972 06:10:57 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 
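In the lines that follow, bperf_cmd is simply rpc.py pointed at the bdevperf RPC socket. Spelled out with the paths from this run, the setup is roughly:

  # start bdevperf in wait-for-RPC mode on its own socket (flags as in keyring/file.sh@45)
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z &

  # register the two PSK files as keyring entries key0 and key1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.dggNnfAkQY
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.7jdmzrFCxU

  # the path/refcnt checks below are keyring_get_keys filtered through jq, for example:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys \
      | jq '.[] | select(.name == "key0")'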
00:24:05.972 06:10:57 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:24:05.972 06:10:57 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:05.972 06:10:57 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:05.972 06:10:57 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:05.972 06:10:57 keyring_file -- keyring/file.sh@46 -- # bperfpid=99020 00:24:05.972 06:10:57 keyring_file -- keyring/file.sh@48 -- # waitforlisten 99020 /var/tmp/bperf.sock 00:24:05.972 06:10:57 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:24:05.972 06:10:57 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 99020 ']' 00:24:05.972 06:10:57 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:05.972 06:10:57 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:05.972 06:10:57 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:05.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:05.972 06:10:57 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:05.972 06:10:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:05.972 [2024-07-13 06:10:57.483348] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:24:05.972 [2024-07-13 06:10:57.483476] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99020 ] 00:24:05.972 [2024-07-13 06:10:57.623879] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:05.972 [2024-07-13 06:10:57.668005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:06.250 [2024-07-13 06:10:57.701087] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:24:06.250 06:10:57 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:06.250 06:10:57 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:24:06.250 06:10:57 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.dggNnfAkQY 00:24:06.250 06:10:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.dggNnfAkQY 00:24:06.516 06:10:58 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.7jdmzrFCxU 00:24:06.516 06:10:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.7jdmzrFCxU 00:24:06.775 06:10:58 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:24:06.775 06:10:58 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:24:06.775 06:10:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:06.775 06:10:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:06.775 06:10:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:07.034 06:10:58 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.dggNnfAkQY == 
\/\t\m\p\/\t\m\p\.\d\g\g\N\n\f\A\k\Q\Y ]] 00:24:07.034 06:10:58 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:24:07.034 06:10:58 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:24:07.034 06:10:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:07.034 06:10:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:07.034 06:10:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:07.293 06:10:58 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.7jdmzrFCxU == \/\t\m\p\/\t\m\p\.\7\j\d\m\z\r\F\C\x\U ]] 00:24:07.293 06:10:58 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:24:07.293 06:10:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:07.293 06:10:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:07.293 06:10:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:07.293 06:10:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:07.293 06:10:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:07.553 06:10:59 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:24:07.553 06:10:59 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:24:07.553 06:10:59 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:07.553 06:10:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:07.553 06:10:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:07.553 06:10:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:07.553 06:10:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:07.812 06:10:59 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:24:07.813 06:10:59 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:07.813 06:10:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:08.072 [2024-07-13 06:10:59.635054] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:08.072 nvme0n1 00:24:08.072 06:10:59 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:24:08.072 06:10:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:08.072 06:10:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:08.072 06:10:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:08.072 06:10:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:08.072 06:10:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:08.330 06:11:00 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:24:08.331 06:11:00 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:24:08.331 06:11:00 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:08.331 06:11:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:08.331 06:11:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:24:08.331 06:11:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:08.331 06:11:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:08.589 06:11:00 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:24:08.589 06:11:00 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:08.847 Running I/O for 1 seconds... 00:24:09.782 00:24:09.782 Latency(us) 00:24:09.782 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:09.782 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:24:09.782 nvme0n1 : 1.01 10521.19 41.10 0.00 0.00 12118.68 6464.23 21924.77 00:24:09.782 =================================================================================================================== 00:24:09.782 Total : 10521.19 41.10 0.00 0.00 12118.68 6464.23 21924.77 00:24:09.782 0 00:24:09.782 06:11:01 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:24:09.782 06:11:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:24:10.041 06:11:01 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:24:10.041 06:11:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:10.041 06:11:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:10.041 06:11:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:10.041 06:11:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:10.041 06:11:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:10.300 06:11:02 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:24:10.300 06:11:02 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:24:10.300 06:11:02 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:10.300 06:11:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:10.300 06:11:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:10.300 06:11:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:10.300 06:11:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:10.560 06:11:02 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:24:10.560 06:11:02 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:10.560 06:11:02 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:24:10.560 06:11:02 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:10.560 06:11:02 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:24:10.560 06:11:02 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:10.560 06:11:02 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:24:10.560 06:11:02 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
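The successful I/O pass just completed maps onto three plain RPC steps: attach a TLS-protected controller with the matching key, run the bdevperf job, detach. The attempt that follows re-runs the attach with key1 and is expected to fail (the NOT wrapper asserts a non-zero exit). Roughly:

  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }

  # attach an NVMe/TCP controller over TLS using the key0 keyring entry
  rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0

  # kick off the randrw job configured on the bdevperf command line (-t 1 second)
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

  # detach again before the negative tests
  rpc bdev_nvme_detach_controller nvme0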
00:24:10.560 06:11:02 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:10.560 06:11:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:10.819 [2024-07-13 06:11:02.497040] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:10.820 [2024-07-13 06:11:02.497620] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x227c310 (107): Transport endpoint is not connected 00:24:10.820 [2024-07-13 06:11:02.498607] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x227c310 (9): Bad file descriptor 00:24:10.820 [2024-07-13 06:11:02.499605] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:10.820 [2024-07-13 06:11:02.499634] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:24:10.820 [2024-07-13 06:11:02.499645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:10.820 request: 00:24:10.820 { 00:24:10.820 "name": "nvme0", 00:24:10.820 "trtype": "tcp", 00:24:10.820 "traddr": "127.0.0.1", 00:24:10.820 "adrfam": "ipv4", 00:24:10.820 "trsvcid": "4420", 00:24:10.820 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:10.820 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:10.820 "prchk_reftag": false, 00:24:10.820 "prchk_guard": false, 00:24:10.820 "hdgst": false, 00:24:10.820 "ddgst": false, 00:24:10.820 "psk": "key1", 00:24:10.820 "method": "bdev_nvme_attach_controller", 00:24:10.820 "req_id": 1 00:24:10.820 } 00:24:10.820 Got JSON-RPC error response 00:24:10.820 response: 00:24:10.820 { 00:24:10.820 "code": -5, 00:24:10.820 "message": "Input/output error" 00:24:10.820 } 00:24:10.820 06:11:02 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:24:10.820 06:11:02 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:10.820 06:11:02 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:10.820 06:11:02 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:10.820 06:11:02 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:24:10.820 06:11:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:10.820 06:11:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:10.820 06:11:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:10.820 06:11:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:10.820 06:11:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:11.079 06:11:02 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:24:11.079 06:11:02 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:24:11.079 06:11:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:11.079 06:11:02 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:11.079 06:11:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:11.079 06:11:02 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:11.079 06:11:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:11.337 06:11:03 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:24:11.337 06:11:03 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:24:11.337 06:11:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:24:11.594 06:11:03 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:24:11.594 06:11:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:24:12.158 06:11:03 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:24:12.158 06:11:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:12.158 06:11:03 keyring_file -- keyring/file.sh@77 -- # jq length 00:24:12.158 06:11:03 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:24:12.158 06:11:03 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.dggNnfAkQY 00:24:12.417 06:11:03 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.dggNnfAkQY 00:24:12.417 06:11:03 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:24:12.417 06:11:03 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.dggNnfAkQY 00:24:12.417 06:11:03 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:24:12.417 06:11:03 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:12.417 06:11:03 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:24:12.417 06:11:03 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:12.417 06:11:03 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.dggNnfAkQY 00:24:12.417 06:11:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.dggNnfAkQY 00:24:12.676 [2024-07-13 06:11:04.156662] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.dggNnfAkQY': 0100660 00:24:12.676 [2024-07-13 06:11:04.156717] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:12.676 request: 00:24:12.676 { 00:24:12.676 "name": "key0", 00:24:12.676 "path": "/tmp/tmp.dggNnfAkQY", 00:24:12.676 "method": "keyring_file_add_key", 00:24:12.676 "req_id": 1 00:24:12.676 } 00:24:12.676 Got JSON-RPC error response 00:24:12.676 response: 00:24:12.676 { 00:24:12.676 "code": -1, 00:24:12.676 "message": "Operation not permitted" 00:24:12.676 } 00:24:12.676 06:11:04 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:24:12.676 06:11:04 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:12.676 06:11:04 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:12.676 06:11:04 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:12.676 06:11:04 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.dggNnfAkQY 00:24:12.676 06:11:04 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.dggNnfAkQY 00:24:12.676 06:11:04 keyring_file -- keyring/common.sh@8 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.dggNnfAkQY 00:24:12.935 06:11:04 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.dggNnfAkQY 00:24:12.935 06:11:04 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:24:12.935 06:11:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:12.935 06:11:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:12.935 06:11:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:12.935 06:11:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:12.935 06:11:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:13.194 06:11:04 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:24:13.194 06:11:04 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:13.194 06:11:04 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:24:13.194 06:11:04 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:13.194 06:11:04 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:24:13.194 06:11:04 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:13.194 06:11:04 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:24:13.194 06:11:04 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:13.194 06:11:04 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:13.194 06:11:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:13.454 [2024-07-13 06:11:04.988974] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.dggNnfAkQY': No such file or directory 00:24:13.454 [2024-07-13 06:11:04.989050] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:24:13.454 [2024-07-13 06:11:04.989091] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:24:13.454 [2024-07-13 06:11:04.989100] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:13.454 [2024-07-13 06:11:04.989108] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:24:13.454 request: 00:24:13.454 { 00:24:13.454 "name": "nvme0", 00:24:13.454 "trtype": "tcp", 00:24:13.454 "traddr": "127.0.0.1", 00:24:13.454 "adrfam": "ipv4", 00:24:13.454 "trsvcid": "4420", 00:24:13.454 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:13.454 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:13.454 "prchk_reftag": false, 00:24:13.454 "prchk_guard": false, 00:24:13.454 "hdgst": false, 00:24:13.454 "ddgst": false, 00:24:13.454 "psk": "key0", 00:24:13.454 "method": "bdev_nvme_attach_controller", 00:24:13.454 "req_id": 1 00:24:13.454 } 00:24:13.454 
Got JSON-RPC error response 00:24:13.454 response: 00:24:13.454 { 00:24:13.454 "code": -19, 00:24:13.454 "message": "No such device" 00:24:13.454 } 00:24:13.454 06:11:05 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:24:13.454 06:11:05 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:13.454 06:11:05 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:13.454 06:11:05 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:13.454 06:11:05 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:24:13.454 06:11:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:24:13.713 06:11:05 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:24:13.713 06:11:05 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:24:13.713 06:11:05 keyring_file -- keyring/common.sh@17 -- # name=key0 00:24:13.713 06:11:05 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:24:13.713 06:11:05 keyring_file -- keyring/common.sh@17 -- # digest=0 00:24:13.713 06:11:05 keyring_file -- keyring/common.sh@18 -- # mktemp 00:24:13.713 06:11:05 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.lX2pD6lm2K 00:24:13.713 06:11:05 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:24:13.713 06:11:05 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:24:13.713 06:11:05 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:24:13.713 06:11:05 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:13.713 06:11:05 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:24:13.713 06:11:05 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:24:13.713 06:11:05 keyring_file -- nvmf/common.sh@705 -- # python - 00:24:13.713 06:11:05 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.lX2pD6lm2K 00:24:13.713 06:11:05 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.lX2pD6lm2K 00:24:13.713 06:11:05 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.lX2pD6lm2K 00:24:13.713 06:11:05 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.lX2pD6lm2K 00:24:13.713 06:11:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.lX2pD6lm2K 00:24:13.973 06:11:05 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:13.973 06:11:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:14.231 nvme0n1 00:24:14.231 06:11:05 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:24:14.231 06:11:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:14.231 06:11:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:14.231 06:11:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:14.231 06:11:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:24:14.231 06:11:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:14.490 06:11:06 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:24:14.490 06:11:06 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:24:14.490 06:11:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:24:15.058 06:11:06 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:24:15.058 06:11:06 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:24:15.058 06:11:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:15.058 06:11:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:15.058 06:11:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:15.058 06:11:06 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:24:15.058 06:11:06 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:24:15.058 06:11:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:15.058 06:11:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:15.058 06:11:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:15.058 06:11:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:15.058 06:11:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:15.626 06:11:07 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:24:15.626 06:11:07 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:24:15.626 06:11:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:24:15.885 06:11:07 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:24:15.885 06:11:07 keyring_file -- keyring/file.sh@104 -- # jq length 00:24:15.885 06:11:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:16.143 06:11:07 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:24:16.143 06:11:07 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.lX2pD6lm2K 00:24:16.143 06:11:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.lX2pD6lm2K 00:24:16.402 06:11:07 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.7jdmzrFCxU 00:24:16.402 06:11:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.7jdmzrFCxU 00:24:16.661 06:11:08 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:16.661 06:11:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:16.920 nvme0n1 00:24:16.920 06:11:08 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:24:16.920 06:11:08 keyring_file 
-- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:24:17.485 06:11:08 keyring_file -- keyring/file.sh@112 -- # config='{ 00:24:17.485 "subsystems": [ 00:24:17.485 { 00:24:17.485 "subsystem": "keyring", 00:24:17.485 "config": [ 00:24:17.485 { 00:24:17.485 "method": "keyring_file_add_key", 00:24:17.485 "params": { 00:24:17.485 "name": "key0", 00:24:17.485 "path": "/tmp/tmp.lX2pD6lm2K" 00:24:17.485 } 00:24:17.485 }, 00:24:17.485 { 00:24:17.485 "method": "keyring_file_add_key", 00:24:17.485 "params": { 00:24:17.485 "name": "key1", 00:24:17.485 "path": "/tmp/tmp.7jdmzrFCxU" 00:24:17.485 } 00:24:17.485 } 00:24:17.485 ] 00:24:17.485 }, 00:24:17.485 { 00:24:17.485 "subsystem": "iobuf", 00:24:17.485 "config": [ 00:24:17.485 { 00:24:17.485 "method": "iobuf_set_options", 00:24:17.485 "params": { 00:24:17.485 "small_pool_count": 8192, 00:24:17.485 "large_pool_count": 1024, 00:24:17.485 "small_bufsize": 8192, 00:24:17.485 "large_bufsize": 135168 00:24:17.485 } 00:24:17.485 } 00:24:17.485 ] 00:24:17.485 }, 00:24:17.485 { 00:24:17.485 "subsystem": "sock", 00:24:17.485 "config": [ 00:24:17.485 { 00:24:17.485 "method": "sock_set_default_impl", 00:24:17.485 "params": { 00:24:17.485 "impl_name": "uring" 00:24:17.485 } 00:24:17.485 }, 00:24:17.485 { 00:24:17.485 "method": "sock_impl_set_options", 00:24:17.485 "params": { 00:24:17.485 "impl_name": "ssl", 00:24:17.485 "recv_buf_size": 4096, 00:24:17.485 "send_buf_size": 4096, 00:24:17.485 "enable_recv_pipe": true, 00:24:17.485 "enable_quickack": false, 00:24:17.485 "enable_placement_id": 0, 00:24:17.485 "enable_zerocopy_send_server": true, 00:24:17.485 "enable_zerocopy_send_client": false, 00:24:17.485 "zerocopy_threshold": 0, 00:24:17.485 "tls_version": 0, 00:24:17.485 "enable_ktls": false 00:24:17.485 } 00:24:17.485 }, 00:24:17.485 { 00:24:17.485 "method": "sock_impl_set_options", 00:24:17.485 "params": { 00:24:17.485 "impl_name": "posix", 00:24:17.485 "recv_buf_size": 2097152, 00:24:17.485 "send_buf_size": 2097152, 00:24:17.485 "enable_recv_pipe": true, 00:24:17.485 "enable_quickack": false, 00:24:17.485 "enable_placement_id": 0, 00:24:17.485 "enable_zerocopy_send_server": true, 00:24:17.485 "enable_zerocopy_send_client": false, 00:24:17.485 "zerocopy_threshold": 0, 00:24:17.485 "tls_version": 0, 00:24:17.485 "enable_ktls": false 00:24:17.485 } 00:24:17.485 }, 00:24:17.485 { 00:24:17.485 "method": "sock_impl_set_options", 00:24:17.485 "params": { 00:24:17.485 "impl_name": "uring", 00:24:17.485 "recv_buf_size": 2097152, 00:24:17.485 "send_buf_size": 2097152, 00:24:17.485 "enable_recv_pipe": true, 00:24:17.485 "enable_quickack": false, 00:24:17.485 "enable_placement_id": 0, 00:24:17.485 "enable_zerocopy_send_server": false, 00:24:17.485 "enable_zerocopy_send_client": false, 00:24:17.485 "zerocopy_threshold": 0, 00:24:17.485 "tls_version": 0, 00:24:17.485 "enable_ktls": false 00:24:17.485 } 00:24:17.485 } 00:24:17.485 ] 00:24:17.485 }, 00:24:17.485 { 00:24:17.485 "subsystem": "vmd", 00:24:17.486 "config": [] 00:24:17.486 }, 00:24:17.486 { 00:24:17.486 "subsystem": "accel", 00:24:17.486 "config": [ 00:24:17.486 { 00:24:17.486 "method": "accel_set_options", 00:24:17.486 "params": { 00:24:17.486 "small_cache_size": 128, 00:24:17.486 "large_cache_size": 16, 00:24:17.486 "task_count": 2048, 00:24:17.486 "sequence_count": 2048, 00:24:17.486 "buf_count": 2048 00:24:17.486 } 00:24:17.486 } 00:24:17.486 ] 00:24:17.486 }, 00:24:17.486 { 00:24:17.486 "subsystem": "bdev", 00:24:17.486 "config": [ 00:24:17.486 { 
00:24:17.486 "method": "bdev_set_options", 00:24:17.486 "params": { 00:24:17.486 "bdev_io_pool_size": 65535, 00:24:17.486 "bdev_io_cache_size": 256, 00:24:17.486 "bdev_auto_examine": true, 00:24:17.486 "iobuf_small_cache_size": 128, 00:24:17.486 "iobuf_large_cache_size": 16 00:24:17.486 } 00:24:17.486 }, 00:24:17.486 { 00:24:17.486 "method": "bdev_raid_set_options", 00:24:17.486 "params": { 00:24:17.486 "process_window_size_kb": 1024 00:24:17.486 } 00:24:17.486 }, 00:24:17.486 { 00:24:17.486 "method": "bdev_iscsi_set_options", 00:24:17.486 "params": { 00:24:17.486 "timeout_sec": 30 00:24:17.486 } 00:24:17.486 }, 00:24:17.486 { 00:24:17.486 "method": "bdev_nvme_set_options", 00:24:17.486 "params": { 00:24:17.486 "action_on_timeout": "none", 00:24:17.486 "timeout_us": 0, 00:24:17.486 "timeout_admin_us": 0, 00:24:17.486 "keep_alive_timeout_ms": 10000, 00:24:17.486 "arbitration_burst": 0, 00:24:17.486 "low_priority_weight": 0, 00:24:17.486 "medium_priority_weight": 0, 00:24:17.486 "high_priority_weight": 0, 00:24:17.486 "nvme_adminq_poll_period_us": 10000, 00:24:17.486 "nvme_ioq_poll_period_us": 0, 00:24:17.486 "io_queue_requests": 512, 00:24:17.486 "delay_cmd_submit": true, 00:24:17.486 "transport_retry_count": 4, 00:24:17.486 "bdev_retry_count": 3, 00:24:17.486 "transport_ack_timeout": 0, 00:24:17.486 "ctrlr_loss_timeout_sec": 0, 00:24:17.486 "reconnect_delay_sec": 0, 00:24:17.486 "fast_io_fail_timeout_sec": 0, 00:24:17.486 "disable_auto_failback": false, 00:24:17.486 "generate_uuids": false, 00:24:17.486 "transport_tos": 0, 00:24:17.486 "nvme_error_stat": false, 00:24:17.486 "rdma_srq_size": 0, 00:24:17.486 "io_path_stat": false, 00:24:17.486 "allow_accel_sequence": false, 00:24:17.486 "rdma_max_cq_size": 0, 00:24:17.486 "rdma_cm_event_timeout_ms": 0, 00:24:17.486 "dhchap_digests": [ 00:24:17.486 "sha256", 00:24:17.486 "sha384", 00:24:17.486 "sha512" 00:24:17.486 ], 00:24:17.486 "dhchap_dhgroups": [ 00:24:17.486 "null", 00:24:17.486 "ffdhe2048", 00:24:17.486 "ffdhe3072", 00:24:17.486 "ffdhe4096", 00:24:17.486 "ffdhe6144", 00:24:17.486 "ffdhe8192" 00:24:17.486 ] 00:24:17.486 } 00:24:17.486 }, 00:24:17.486 { 00:24:17.486 "method": "bdev_nvme_attach_controller", 00:24:17.486 "params": { 00:24:17.486 "name": "nvme0", 00:24:17.486 "trtype": "TCP", 00:24:17.486 "adrfam": "IPv4", 00:24:17.486 "traddr": "127.0.0.1", 00:24:17.486 "trsvcid": "4420", 00:24:17.486 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:17.486 "prchk_reftag": false, 00:24:17.486 "prchk_guard": false, 00:24:17.486 "ctrlr_loss_timeout_sec": 0, 00:24:17.486 "reconnect_delay_sec": 0, 00:24:17.486 "fast_io_fail_timeout_sec": 0, 00:24:17.486 "psk": "key0", 00:24:17.486 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:17.486 "hdgst": false, 00:24:17.486 "ddgst": false 00:24:17.486 } 00:24:17.486 }, 00:24:17.486 { 00:24:17.486 "method": "bdev_nvme_set_hotplug", 00:24:17.486 "params": { 00:24:17.486 "period_us": 100000, 00:24:17.486 "enable": false 00:24:17.486 } 00:24:17.486 }, 00:24:17.486 { 00:24:17.486 "method": "bdev_wait_for_examine" 00:24:17.486 } 00:24:17.486 ] 00:24:17.486 }, 00:24:17.486 { 00:24:17.486 "subsystem": "nbd", 00:24:17.486 "config": [] 00:24:17.486 } 00:24:17.486 ] 00:24:17.486 }' 00:24:17.486 06:11:08 keyring_file -- keyring/file.sh@114 -- # killprocess 99020 00:24:17.486 06:11:08 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 99020 ']' 00:24:17.486 06:11:08 keyring_file -- common/autotest_common.sh@952 -- # kill -0 99020 00:24:17.486 06:11:08 keyring_file -- common/autotest_common.sh@953 -- # uname 
00:24:17.486 06:11:08 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:17.486 06:11:08 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 99020 00:24:17.486 killing process with pid 99020 00:24:17.486 Received shutdown signal, test time was about 1.000000 seconds 00:24:17.486 00:24:17.486 Latency(us) 00:24:17.486 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:17.486 =================================================================================================================== 00:24:17.486 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:17.486 06:11:08 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:17.486 06:11:08 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:17.486 06:11:08 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 99020' 00:24:17.486 06:11:08 keyring_file -- common/autotest_common.sh@967 -- # kill 99020 00:24:17.486 06:11:08 keyring_file -- common/autotest_common.sh@972 -- # wait 99020 00:24:17.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:17.486 06:11:09 keyring_file -- keyring/file.sh@117 -- # bperfpid=99263 00:24:17.486 06:11:09 keyring_file -- keyring/file.sh@119 -- # waitforlisten 99263 /var/tmp/bperf.sock 00:24:17.486 06:11:09 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 99263 ']' 00:24:17.486 06:11:09 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:17.486 06:11:09 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:17.486 06:11:09 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:24:17.486 06:11:09 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:24:17.486 06:11:09 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:17.486 06:11:09 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:24:17.486 "subsystems": [ 00:24:17.486 { 00:24:17.486 "subsystem": "keyring", 00:24:17.486 "config": [ 00:24:17.486 { 00:24:17.486 "method": "keyring_file_add_key", 00:24:17.486 "params": { 00:24:17.486 "name": "key0", 00:24:17.486 "path": "/tmp/tmp.lX2pD6lm2K" 00:24:17.486 } 00:24:17.486 }, 00:24:17.486 { 00:24:17.486 "method": "keyring_file_add_key", 00:24:17.486 "params": { 00:24:17.486 "name": "key1", 00:24:17.486 "path": "/tmp/tmp.7jdmzrFCxU" 00:24:17.486 } 00:24:17.486 } 00:24:17.486 ] 00:24:17.486 }, 00:24:17.486 { 00:24:17.486 "subsystem": "iobuf", 00:24:17.486 "config": [ 00:24:17.486 { 00:24:17.486 "method": "iobuf_set_options", 00:24:17.486 "params": { 00:24:17.486 "small_pool_count": 8192, 00:24:17.486 "large_pool_count": 1024, 00:24:17.486 "small_bufsize": 8192, 00:24:17.486 "large_bufsize": 135168 00:24:17.486 } 00:24:17.486 } 00:24:17.486 ] 00:24:17.486 }, 00:24:17.486 { 00:24:17.486 "subsystem": "sock", 00:24:17.486 "config": [ 00:24:17.486 { 00:24:17.486 "method": "sock_set_default_impl", 00:24:17.486 "params": { 00:24:17.486 "impl_name": "uring" 00:24:17.486 } 00:24:17.486 }, 00:24:17.486 { 00:24:17.486 "method": "sock_impl_set_options", 00:24:17.486 "params": { 00:24:17.486 "impl_name": "ssl", 00:24:17.486 "recv_buf_size": 4096, 00:24:17.486 "send_buf_size": 4096, 00:24:17.486 "enable_recv_pipe": true, 00:24:17.486 "enable_quickack": false, 00:24:17.486 "enable_placement_id": 0, 00:24:17.486 "enable_zerocopy_send_server": true, 00:24:17.486 "enable_zerocopy_send_client": false, 00:24:17.486 "zerocopy_threshold": 0, 00:24:17.486 "tls_version": 0, 00:24:17.486 "enable_ktls": false 00:24:17.486 } 00:24:17.486 }, 00:24:17.486 { 00:24:17.486 "method": "sock_impl_set_options", 00:24:17.486 "params": { 00:24:17.486 "impl_name": "posix", 00:24:17.486 "recv_buf_size": 2097152, 00:24:17.486 "send_buf_size": 2097152, 00:24:17.486 "enable_recv_pipe": true, 00:24:17.486 "enable_quickack": false, 00:24:17.486 "enable_placement_id": 0, 00:24:17.486 "enable_zerocopy_send_server": true, 00:24:17.486 "enable_zerocopy_send_client": false, 00:24:17.486 "zerocopy_threshold": 0, 00:24:17.486 "tls_version": 0, 00:24:17.486 "enable_ktls": false 00:24:17.486 } 00:24:17.486 }, 00:24:17.486 { 00:24:17.486 "method": "sock_impl_set_options", 00:24:17.486 "params": { 00:24:17.487 "impl_name": "uring", 00:24:17.487 "recv_buf_size": 2097152, 00:24:17.487 "send_buf_size": 2097152, 00:24:17.487 "enable_recv_pipe": true, 00:24:17.487 "enable_quickack": false, 00:24:17.487 "enable_placement_id": 0, 00:24:17.487 "enable_zerocopy_send_server": false, 00:24:17.487 "enable_zerocopy_send_client": false, 00:24:17.487 "zerocopy_threshold": 0, 00:24:17.487 "tls_version": 0, 00:24:17.487 "enable_ktls": false 00:24:17.487 } 00:24:17.487 } 00:24:17.487 ] 00:24:17.487 }, 00:24:17.487 { 00:24:17.487 "subsystem": "vmd", 00:24:17.487 "config": [] 00:24:17.487 }, 00:24:17.487 { 00:24:17.487 "subsystem": "accel", 00:24:17.487 "config": [ 00:24:17.487 { 00:24:17.487 "method": "accel_set_options", 00:24:17.487 "params": { 00:24:17.487 "small_cache_size": 128, 00:24:17.487 "large_cache_size": 16, 00:24:17.487 "task_count": 2048, 00:24:17.487 "sequence_count": 2048, 00:24:17.487 "buf_count": 2048 00:24:17.487 } 00:24:17.487 } 00:24:17.487 ] 00:24:17.487 }, 00:24:17.487 { 00:24:17.487 "subsystem": "bdev", 00:24:17.487 "config": [ 00:24:17.487 { 00:24:17.487 "method": 
"bdev_set_options", 00:24:17.487 "params": { 00:24:17.487 "bdev_io_pool_size": 65535, 00:24:17.487 "bdev_io_cache_size": 256, 00:24:17.487 "bdev_auto_examine": true, 00:24:17.487 "iobuf_small_cache_size": 128, 00:24:17.487 "iobuf_large_cache_size": 16 00:24:17.487 } 00:24:17.487 }, 00:24:17.487 { 00:24:17.487 "method": "bdev_raid_set_options", 00:24:17.487 "params": { 00:24:17.487 "process_window_size_kb": 1024 00:24:17.487 } 00:24:17.487 }, 00:24:17.487 { 00:24:17.487 "method": "bdev_iscsi_set_options", 00:24:17.487 "params": { 00:24:17.487 "timeout_sec": 30 00:24:17.487 } 00:24:17.487 }, 00:24:17.487 { 00:24:17.487 "method": "bdev_nvme_set_options", 00:24:17.487 "params": { 00:24:17.487 "action_on_timeout": "none", 00:24:17.487 "timeout_us": 0, 00:24:17.487 "timeout_admin_us": 0, 00:24:17.487 "keep_alive_timeout_ms": 10000, 00:24:17.487 "arbitration_burst": 0, 00:24:17.487 "low_priority_weight": 0, 00:24:17.487 "medium_priority_weight": 0, 00:24:17.487 "high_priority_weight": 0, 00:24:17.487 "nvme_adminq_poll_period_us": 10000, 00:24:17.487 "nvme_ioq_poll_period_us": 0, 00:24:17.487 "io_queue_requests": 512, 00:24:17.487 "delay_cmd_submit": true, 00:24:17.487 "transport_retry_count": 4, 00:24:17.487 "bdev_retry_count": 3, 00:24:17.487 "transport_ack_timeout": 0, 00:24:17.487 "ctrlr_loss_timeout_sec": 0, 00:24:17.487 "reconnect_delay_sec": 0, 00:24:17.487 "fast_io_fail_timeout_sec": 0, 00:24:17.487 "disable_auto_failback": false, 00:24:17.487 "generate_uuids": false, 00:24:17.487 "transport_tos": 0, 00:24:17.487 "nvme_error_stat": false, 00:24:17.487 "rdma_srq_size": 0, 00:24:17.487 "io_path_stat": false, 00:24:17.487 "allow_accel_sequence": false, 00:24:17.487 "rdma_max_cq_size": 0, 00:24:17.487 "rdma_cm_event_timeout_ms": 0, 00:24:17.487 "dhchap_digests": [ 00:24:17.487 "sha256", 00:24:17.487 "sha384", 00:24:17.487 "sha512" 00:24:17.487 ], 00:24:17.487 "dhchap_dhgroups": [ 00:24:17.487 "null", 00:24:17.487 "ffdhe2048", 00:24:17.487 "ffdhe3072", 00:24:17.487 "ffdhe4096", 00:24:17.487 "ffdhe6144", 00:24:17.487 "ffdhe8192" 00:24:17.487 ] 00:24:17.487 } 00:24:17.487 }, 00:24:17.487 { 00:24:17.487 "method": "bdev_nvme_attach_controller", 00:24:17.487 "params": { 00:24:17.487 "name": "nvme0", 00:24:17.487 "trtype": "TCP", 00:24:17.487 "adrfam": "IPv4", 00:24:17.487 "traddr": "127.0.0.1", 00:24:17.487 "trsvcid": "4420", 00:24:17.487 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:17.487 "prchk_reftag": false, 00:24:17.487 "prchk_guard": false, 00:24:17.487 "ctrlr_loss_timeout_sec": 0, 00:24:17.487 "reconnect_delay_sec": 0, 00:24:17.487 "fast_io_fail_timeout_sec": 0, 00:24:17.487 "psk": "key0", 00:24:17.487 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:17.487 "hdgst": false, 00:24:17.487 "ddgst": false 00:24:17.487 } 00:24:17.487 }, 00:24:17.487 { 00:24:17.487 "method": "bdev_nvme_set_hotplug", 00:24:17.487 "params": { 00:24:17.487 "period_us": 100000, 00:24:17.487 "enable": false 00:24:17.487 } 00:24:17.487 }, 00:24:17.487 { 00:24:17.487 "method": "bdev_wait_for_examine" 00:24:17.487 } 00:24:17.487 ] 00:24:17.487 }, 00:24:17.487 { 00:24:17.487 "subsystem": "nbd", 00:24:17.487 "config": [] 00:24:17.487 } 00:24:17.487 ] 00:24:17.487 }' 00:24:17.487 06:11:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:17.487 [2024-07-13 06:11:09.148097] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:24:17.487 [2024-07-13 06:11:09.148427] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99263 ] 00:24:17.745 [2024-07-13 06:11:09.281185] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:17.745 [2024-07-13 06:11:09.320612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:17.745 [2024-07-13 06:11:09.435265] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:24:18.003 [2024-07-13 06:11:09.473391] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:18.570 06:11:10 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:18.570 06:11:10 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:24:18.570 06:11:10 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:24:18.570 06:11:10 keyring_file -- keyring/file.sh@120 -- # jq length 00:24:18.570 06:11:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:18.828 06:11:10 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:24:18.828 06:11:10 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:24:18.828 06:11:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:18.828 06:11:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:18.828 06:11:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:18.828 06:11:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:18.828 06:11:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:19.086 06:11:10 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:24:19.086 06:11:10 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:24:19.086 06:11:10 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:19.086 06:11:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:19.086 06:11:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:19.086 06:11:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:19.086 06:11:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:19.395 06:11:10 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:24:19.395 06:11:10 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:24:19.395 06:11:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:24:19.395 06:11:10 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:24:19.653 06:11:11 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:24:19.653 06:11:11 keyring_file -- keyring/file.sh@1 -- # cleanup 00:24:19.653 06:11:11 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.lX2pD6lm2K /tmp/tmp.7jdmzrFCxU 00:24:19.653 06:11:11 keyring_file -- keyring/file.sh@20 -- # killprocess 99263 00:24:19.653 06:11:11 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 99263 ']' 00:24:19.653 06:11:11 keyring_file -- common/autotest_common.sh@952 -- # kill -0 99263 00:24:19.653 06:11:11 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:24:19.653 06:11:11 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:19.653 06:11:11 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 99263 00:24:19.653 killing process with pid 99263 00:24:19.653 Received shutdown signal, test time was about 1.000000 seconds 00:24:19.653 00:24:19.653 Latency(us) 00:24:19.653 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:19.653 =================================================================================================================== 00:24:19.653 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:19.653 06:11:11 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:19.653 06:11:11 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:19.653 06:11:11 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 99263' 00:24:19.653 06:11:11 keyring_file -- common/autotest_common.sh@967 -- # kill 99263 00:24:19.653 06:11:11 keyring_file -- common/autotest_common.sh@972 -- # wait 99263 00:24:19.653 06:11:11 keyring_file -- keyring/file.sh@21 -- # killprocess 99011 00:24:19.653 06:11:11 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 99011 ']' 00:24:19.653 06:11:11 keyring_file -- common/autotest_common.sh@952 -- # kill -0 99011 00:24:19.653 06:11:11 keyring_file -- common/autotest_common.sh@953 -- # uname 00:24:19.653 06:11:11 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:19.653 06:11:11 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 99011 00:24:19.912 killing process with pid 99011 00:24:19.912 06:11:11 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:19.912 06:11:11 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:19.912 06:11:11 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 99011' 00:24:19.912 06:11:11 keyring_file -- common/autotest_common.sh@967 -- # kill 99011 00:24:19.912 [2024-07-13 06:11:11.392985] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:19.912 06:11:11 keyring_file -- common/autotest_common.sh@972 -- # wait 99011 00:24:20.171 00:24:20.171 real 0m14.936s 00:24:20.171 user 0m38.787s 00:24:20.171 sys 0m2.909s 00:24:20.171 06:11:11 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:20.171 06:11:11 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:20.171 ************************************ 00:24:20.171 END TEST keyring_file 00:24:20.171 ************************************ 00:24:20.171 06:11:11 -- common/autotest_common.sh@1142 -- # return 0 00:24:20.171 06:11:11 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:24:20.171 06:11:11 -- spdk/autotest.sh@297 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:24:20.171 06:11:11 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:20.171 06:11:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:20.171 06:11:11 -- common/autotest_common.sh@10 -- # set +x 00:24:20.171 ************************************ 00:24:20.171 START TEST keyring_linux 00:24:20.171 ************************************ 00:24:20.171 06:11:11 keyring_linux -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:24:20.171 * Looking for test 
storage... 00:24:20.171 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:24:20.171 06:11:11 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:24:20.171 06:11:11 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:20.171 06:11:11 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:24:20.171 06:11:11 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:20.171 06:11:11 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:20.171 06:11:11 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:20.171 06:11:11 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:20.171 06:11:11 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:20.171 06:11:11 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:20.171 06:11:11 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:20.171 06:11:11 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:20.171 06:11:11 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:20.171 06:11:11 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:20.171 06:11:11 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d95af516-4532-4483-a837-b3cd72acabce 00:24:20.171 06:11:11 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=d95af516-4532-4483-a837-b3cd72acabce 00:24:20.171 06:11:11 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:20.171 06:11:11 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:20.171 06:11:11 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:20.171 06:11:11 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:20.171 06:11:11 keyring_linux -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:20.171 06:11:11 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:20.171 06:11:11 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:20.171 06:11:11 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:20.171 06:11:11 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.171 06:11:11 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.171 06:11:11 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.171 06:11:11 keyring_linux -- paths/export.sh@5 -- # export PATH 00:24:20.171 06:11:11 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.171 06:11:11 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:24:20.171 06:11:11 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:20.171 06:11:11 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:20.171 06:11:11 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:20.171 06:11:11 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:20.171 06:11:11 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:20.171 06:11:11 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:20.171 06:11:11 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:20.171 06:11:11 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:20.171 06:11:11 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:24:20.171 06:11:11 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:24:20.171 06:11:11 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:24:20.171 06:11:11 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:24:20.171 06:11:11 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:24:20.171 06:11:11 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:24:20.171 06:11:11 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:24:20.171 06:11:11 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:24:20.171 06:11:11 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:24:20.171 06:11:11 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:24:20.171 06:11:11 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:24:20.171 06:11:11 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:24:20.171 06:11:11 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:24:20.171 06:11:11 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:24:20.171 06:11:11 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:24:20.171 06:11:11 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:20.171 06:11:11 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:24:20.171 06:11:11 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:24:20.171 06:11:11 keyring_linux -- nvmf/common.sh@705 -- # python - 00:24:20.171 06:11:11 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:24:20.171 /tmp/:spdk-test:key0 00:24:20.171 06:11:11 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:24:20.172 06:11:11 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:24:20.172 06:11:11 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:24:20.172 06:11:11 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:24:20.172 06:11:11 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:24:20.172 06:11:11 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:24:20.172 06:11:11 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:24:20.172 06:11:11 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:24:20.172 06:11:11 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:24:20.172 06:11:11 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:24:20.172 06:11:11 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:20.172 06:11:11 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:24:20.172 06:11:11 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:24:20.172 06:11:11 keyring_linux -- nvmf/common.sh@705 -- # python - 00:24:20.431 06:11:11 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:24:20.431 06:11:11 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:24:20.431 /tmp/:spdk-test:key1 00:24:20.431 06:11:11 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=99378 00:24:20.431 06:11:11 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:20.431 06:11:11 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 99378 00:24:20.431 06:11:11 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 99378 ']' 00:24:20.431 06:11:11 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:20.431 06:11:11 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:20.431 06:11:11 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:20.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:20.431 06:11:11 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:20.431 06:11:11 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:24:20.431 [2024-07-13 06:11:11.996678] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:24:20.431 [2024-07-13 06:11:11.996772] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99378 ] 00:24:20.431 [2024-07-13 06:11:12.137691] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:20.690 [2024-07-13 06:11:12.176959] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:20.690 [2024-07-13 06:11:12.208250] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:24:21.258 06:11:12 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:21.258 06:11:12 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:24:21.258 06:11:12 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:24:21.258 06:11:12 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.258 06:11:12 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:24:21.258 [2024-07-13 06:11:12.945968] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:21.258 null0 00:24:21.258 [2024-07-13 06:11:12.977919] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:21.258 [2024-07-13 06:11:12.978170] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:24:21.518 06:11:12 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.518 06:11:12 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:24:21.518 795852233 00:24:21.518 06:11:13 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:24:21.518 1036434652 00:24:21.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:21.518 06:11:13 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=99396 00:24:21.518 06:11:13 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:24:21.518 06:11:13 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 99396 /var/tmp/bperf.sock 00:24:21.518 06:11:13 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 99396 ']' 00:24:21.518 06:11:13 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:21.518 06:11:13 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:21.518 06:11:13 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:21.518 06:11:13 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:21.518 06:11:13 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:24:21.518 [2024-07-13 06:11:13.064520] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:24:21.518 [2024-07-13 06:11:13.064854] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99396 ] 00:24:21.518 [2024-07-13 06:11:13.202510] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:21.777 [2024-07-13 06:11:13.248003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:22.344 06:11:14 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:22.344 06:11:14 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:24:22.344 06:11:14 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:24:22.344 06:11:14 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:24:22.603 06:11:14 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:24:22.603 06:11:14 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:22.862 [2024-07-13 06:11:14.532736] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:24:22.862 06:11:14 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:24:22.862 06:11:14 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:24:23.121 [2024-07-13 06:11:14.829673] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:23.379 nvme0n1 00:24:23.379 06:11:14 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:24:23.379 06:11:14 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:24:23.379 06:11:14 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:24:23.379 06:11:14 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:24:23.379 06:11:14 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:24:23.379 06:11:14 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:23.637 06:11:15 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:24:23.637 06:11:15 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:24:23.637 06:11:15 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:24:23.637 06:11:15 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:24:23.637 06:11:15 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:23.637 06:11:15 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:23.637 06:11:15 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:24:23.895 06:11:15 keyring_linux -- keyring/linux.sh@25 -- # sn=795852233 00:24:23.895 06:11:15 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:24:23.895 06:11:15 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:24:23.895 
06:11:15 keyring_linux -- keyring/linux.sh@26 -- # [[ 795852233 == \7\9\5\8\5\2\2\3\3 ]] 00:24:23.895 06:11:15 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 795852233 00:24:23.896 06:11:15 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:24:23.896 06:11:15 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:23.896 Running I/O for 1 seconds... 00:24:25.270 00:24:25.270 Latency(us) 00:24:25.270 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:25.270 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:25.270 nvme0n1 : 1.01 11383.10 44.47 0.00 0.00 11169.39 5838.66 17158.52 00:24:25.270 =================================================================================================================== 00:24:25.270 Total : 11383.10 44.47 0.00 0.00 11169.39 5838.66 17158.52 00:24:25.270 0 00:24:25.270 06:11:16 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:24:25.270 06:11:16 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:24:25.270 06:11:16 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:24:25.270 06:11:16 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:24:25.270 06:11:16 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:24:25.270 06:11:16 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:24:25.270 06:11:16 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:24:25.270 06:11:16 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:25.529 06:11:17 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:24:25.529 06:11:17 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:24:25.529 06:11:17 keyring_linux -- keyring/linux.sh@23 -- # return 00:24:25.529 06:11:17 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:24:25.529 06:11:17 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:24:25.529 06:11:17 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:24:25.529 06:11:17 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:24:25.529 06:11:17 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:25.529 06:11:17 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:24:25.529 06:11:17 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:25.529 06:11:17 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:24:25.529 06:11:17 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:24:25.787 [2024-07-13 06:11:17.443022] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:25.787 [2024-07-13 06:11:17.443599] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e74270 (107): Transport endpoint is not connected 00:24:25.787 [2024-07-13 06:11:17.444573] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e74270 (9): Bad file descriptor 00:24:25.787 [2024-07-13 06:11:17.445575] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:25.787 [2024-07-13 06:11:17.445621] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:24:25.787 [2024-07-13 06:11:17.445632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:25.787 request: 00:24:25.787 { 00:24:25.787 "name": "nvme0", 00:24:25.787 "trtype": "tcp", 00:24:25.787 "traddr": "127.0.0.1", 00:24:25.787 "adrfam": "ipv4", 00:24:25.787 "trsvcid": "4420", 00:24:25.787 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:25.787 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:25.787 "prchk_reftag": false, 00:24:25.787 "prchk_guard": false, 00:24:25.787 "hdgst": false, 00:24:25.787 "ddgst": false, 00:24:25.787 "psk": ":spdk-test:key1", 00:24:25.787 "method": "bdev_nvme_attach_controller", 00:24:25.787 "req_id": 1 00:24:25.787 } 00:24:25.787 Got JSON-RPC error response 00:24:25.787 response: 00:24:25.787 { 00:24:25.787 "code": -5, 00:24:25.787 "message": "Input/output error" 00:24:25.787 } 00:24:25.787 06:11:17 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:24:25.787 06:11:17 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:25.787 06:11:17 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:25.787 06:11:17 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:25.787 06:11:17 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:24:25.787 06:11:17 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:24:25.787 06:11:17 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:24:25.787 06:11:17 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:24:25.787 06:11:17 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:24:25.787 06:11:17 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:24:25.787 06:11:17 keyring_linux -- keyring/linux.sh@33 -- # sn=795852233 00:24:25.787 06:11:17 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 795852233 00:24:25.787 1 links removed 00:24:25.787 06:11:17 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:24:25.787 06:11:17 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:24:25.787 06:11:17 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:24:25.787 06:11:17 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:24:25.787 06:11:17 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:24:25.787 06:11:17 keyring_linux -- keyring/linux.sh@33 -- # sn=1036434652 00:24:25.787 06:11:17 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1036434652 00:24:25.787 1 links removed 00:24:25.787 06:11:17 
keyring_linux -- keyring/linux.sh@41 -- # killprocess 99396 00:24:25.787 06:11:17 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 99396 ']' 00:24:25.787 06:11:17 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 99396 00:24:25.787 06:11:17 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:24:25.787 06:11:17 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:25.787 06:11:17 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 99396 00:24:26.046 killing process with pid 99396 00:24:26.046 Received shutdown signal, test time was about 1.000000 seconds 00:24:26.046 00:24:26.046 Latency(us) 00:24:26.046 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:26.046 =================================================================================================================== 00:24:26.046 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:26.046 06:11:17 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:26.046 06:11:17 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:26.046 06:11:17 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 99396' 00:24:26.046 06:11:17 keyring_linux -- common/autotest_common.sh@967 -- # kill 99396 00:24:26.046 06:11:17 keyring_linux -- common/autotest_common.sh@972 -- # wait 99396 00:24:26.046 06:11:17 keyring_linux -- keyring/linux.sh@42 -- # killprocess 99378 00:24:26.046 06:11:17 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 99378 ']' 00:24:26.046 06:11:17 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 99378 00:24:26.046 06:11:17 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:24:26.046 06:11:17 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:26.046 06:11:17 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 99378 00:24:26.046 killing process with pid 99378 00:24:26.046 06:11:17 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:26.046 06:11:17 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:26.046 06:11:17 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 99378' 00:24:26.046 06:11:17 keyring_linux -- common/autotest_common.sh@967 -- # kill 99378 00:24:26.046 06:11:17 keyring_linux -- common/autotest_common.sh@972 -- # wait 99378 00:24:26.304 ************************************ 00:24:26.304 END TEST keyring_linux 00:24:26.304 ************************************ 00:24:26.304 00:24:26.304 real 0m6.241s 00:24:26.304 user 0m12.451s 00:24:26.304 sys 0m1.456s 00:24:26.304 06:11:17 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:26.304 06:11:17 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:24:26.304 06:11:17 -- common/autotest_common.sh@1142 -- # return 0 00:24:26.304 06:11:17 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:24:26.304 06:11:17 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:24:26.304 06:11:17 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:24:26.304 06:11:17 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:24:26.304 06:11:17 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:24:26.304 06:11:17 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:24:26.304 06:11:17 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:24:26.304 06:11:17 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:24:26.304 06:11:17 -- spdk/autotest.sh@347 -- # '[' 0 
-eq 1 ']' 00:24:26.304 06:11:17 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:24:26.304 06:11:17 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:24:26.304 06:11:17 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:24:26.304 06:11:17 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:24:26.304 06:11:17 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:24:26.304 06:11:17 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:24:26.304 06:11:17 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:24:26.304 06:11:17 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:24:26.304 06:11:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:26.304 06:11:17 -- common/autotest_common.sh@10 -- # set +x 00:24:26.304 06:11:17 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:24:26.304 06:11:17 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:24:26.304 06:11:17 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:24:26.304 06:11:17 -- common/autotest_common.sh@10 -- # set +x 00:24:28.205 INFO: APP EXITING 00:24:28.205 INFO: killing all VMs 00:24:28.205 INFO: killing vhost app 00:24:28.205 INFO: EXIT DONE 00:24:28.773 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:29.032 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:24:29.032 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:24:29.599 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:29.599 Cleaning 00:24:29.599 Removing: /var/run/dpdk/spdk0/config 00:24:29.599 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:24:29.599 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:24:29.599 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:24:29.599 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:24:29.599 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:24:29.599 Removing: /var/run/dpdk/spdk0/hugepage_info 00:24:29.599 Removing: /var/run/dpdk/spdk1/config 00:24:29.599 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:24:29.599 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:24:29.599 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:24:29.599 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:24:29.599 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:24:29.599 Removing: /var/run/dpdk/spdk1/hugepage_info 00:24:29.599 Removing: /var/run/dpdk/spdk2/config 00:24:29.857 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:24:29.857 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:24:29.857 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:24:29.857 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:24:29.857 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:24:29.857 Removing: /var/run/dpdk/spdk2/hugepage_info 00:24:29.857 Removing: /var/run/dpdk/spdk3/config 00:24:29.857 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:24:29.857 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:24:29.857 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:24:29.857 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:24:29.857 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:24:29.857 Removing: /var/run/dpdk/spdk3/hugepage_info 00:24:29.857 Removing: /var/run/dpdk/spdk4/config 00:24:29.857 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:24:29.857 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:24:29.857 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:24:29.857 
Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:24:29.857 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:24:29.857 Removing: /var/run/dpdk/spdk4/hugepage_info 00:24:29.857 Removing: /dev/shm/nvmf_trace.0 00:24:29.857 Removing: /dev/shm/spdk_tgt_trace.pid70841 00:24:29.857 Removing: /var/run/dpdk/spdk0 00:24:29.857 Removing: /var/run/dpdk/spdk1 00:24:29.857 Removing: /var/run/dpdk/spdk2 00:24:29.857 Removing: /var/run/dpdk/spdk3 00:24:29.857 Removing: /var/run/dpdk/spdk4 00:24:29.857 Removing: /var/run/dpdk/spdk_pid70703 00:24:29.857 Removing: /var/run/dpdk/spdk_pid70841 00:24:29.857 Removing: /var/run/dpdk/spdk_pid71022 00:24:29.857 Removing: /var/run/dpdk/spdk_pid71108 00:24:29.857 Removing: /var/run/dpdk/spdk_pid71123 00:24:29.857 Removing: /var/run/dpdk/spdk_pid71238 00:24:29.857 Removing: /var/run/dpdk/spdk_pid71243 00:24:29.857 Removing: /var/run/dpdk/spdk_pid71361 00:24:29.857 Removing: /var/run/dpdk/spdk_pid71546 00:24:29.857 Removing: /var/run/dpdk/spdk_pid71692 00:24:29.857 Removing: /var/run/dpdk/spdk_pid71751 00:24:29.857 Removing: /var/run/dpdk/spdk_pid71827 00:24:29.857 Removing: /var/run/dpdk/spdk_pid71905 00:24:29.857 Removing: /var/run/dpdk/spdk_pid71977 00:24:29.857 Removing: /var/run/dpdk/spdk_pid72015 00:24:29.857 Removing: /var/run/dpdk/spdk_pid72045 00:24:29.857 Removing: /var/run/dpdk/spdk_pid72101 00:24:29.857 Removing: /var/run/dpdk/spdk_pid72177 00:24:29.857 Removing: /var/run/dpdk/spdk_pid72596 00:24:29.857 Removing: /var/run/dpdk/spdk_pid72635 00:24:29.857 Removing: /var/run/dpdk/spdk_pid72679 00:24:29.857 Removing: /var/run/dpdk/spdk_pid72682 00:24:29.857 Removing: /var/run/dpdk/spdk_pid72743 00:24:29.857 Removing: /var/run/dpdk/spdk_pid72752 00:24:29.857 Removing: /var/run/dpdk/spdk_pid72812 00:24:29.857 Removing: /var/run/dpdk/spdk_pid72822 00:24:29.857 Removing: /var/run/dpdk/spdk_pid72862 00:24:29.857 Removing: /var/run/dpdk/spdk_pid72880 00:24:29.857 Removing: /var/run/dpdk/spdk_pid72920 00:24:29.857 Removing: /var/run/dpdk/spdk_pid72930 00:24:29.857 Removing: /var/run/dpdk/spdk_pid73053 00:24:29.857 Removing: /var/run/dpdk/spdk_pid73083 00:24:29.857 Removing: /var/run/dpdk/spdk_pid73154 00:24:29.857 Removing: /var/run/dpdk/spdk_pid73198 00:24:29.857 Removing: /var/run/dpdk/spdk_pid73217 00:24:29.857 Removing: /var/run/dpdk/spdk_pid73281 00:24:29.857 Removing: /var/run/dpdk/spdk_pid73310 00:24:29.857 Removing: /var/run/dpdk/spdk_pid73349 00:24:29.857 Removing: /var/run/dpdk/spdk_pid73379 00:24:29.857 Removing: /var/run/dpdk/spdk_pid73408 00:24:29.857 Removing: /var/run/dpdk/spdk_pid73443 00:24:29.857 Removing: /var/run/dpdk/spdk_pid73477 00:24:29.857 Removing: /var/run/dpdk/spdk_pid73506 00:24:29.857 Removing: /var/run/dpdk/spdk_pid73541 00:24:29.857 Removing: /var/run/dpdk/spdk_pid73578 00:24:29.857 Removing: /var/run/dpdk/spdk_pid73607 00:24:29.857 Removing: /var/run/dpdk/spdk_pid73642 00:24:29.857 Removing: /var/run/dpdk/spdk_pid73676 00:24:29.857 Removing: /var/run/dpdk/spdk_pid73705 00:24:29.857 Removing: /var/run/dpdk/spdk_pid73741 00:24:29.857 Removing: /var/run/dpdk/spdk_pid73770 00:24:29.857 Removing: /var/run/dpdk/spdk_pid73804 00:24:29.857 Removing: /var/run/dpdk/spdk_pid73842 00:24:29.857 Removing: /var/run/dpdk/spdk_pid73874 00:24:29.857 Removing: /var/run/dpdk/spdk_pid73908 00:24:29.857 Removing: /var/run/dpdk/spdk_pid73944 00:24:29.857 Removing: /var/run/dpdk/spdk_pid74008 00:24:29.857 Removing: /var/run/dpdk/spdk_pid74087 00:24:30.116 Removing: /var/run/dpdk/spdk_pid74382 00:24:30.116 Removing: /var/run/dpdk/spdk_pid74401 
00:24:30.116 Removing: /var/run/dpdk/spdk_pid74432
00:24:30.116 Removing: /var/run/dpdk/spdk_pid74440
00:24:30.116 Removing: /var/run/dpdk/spdk_pid74461
00:24:30.116 Removing: /var/run/dpdk/spdk_pid74480
00:24:30.116 Removing: /var/run/dpdk/spdk_pid74488
00:24:30.116 Removing: /var/run/dpdk/spdk_pid74509
00:24:30.116 Removing: /var/run/dpdk/spdk_pid74528
00:24:30.116 Removing: /var/run/dpdk/spdk_pid74536
00:24:30.116 Removing: /var/run/dpdk/spdk_pid74557
00:24:30.116 Removing: /var/run/dpdk/spdk_pid74576
00:24:30.116 Removing: /var/run/dpdk/spdk_pid74584
00:24:30.116 Removing: /var/run/dpdk/spdk_pid74604
00:24:30.116 Removing: /var/run/dpdk/spdk_pid74613
00:24:30.116 Removing: /var/run/dpdk/spdk_pid74632
00:24:30.116 Removing: /var/run/dpdk/spdk_pid74642
00:24:30.116 Removing: /var/run/dpdk/spdk_pid74661
00:24:30.116 Removing: /var/run/dpdk/spdk_pid74680
00:24:30.116 Removing: /var/run/dpdk/spdk_pid74690
00:24:30.116 Removing: /var/run/dpdk/spdk_pid74726
00:24:30.116 Removing: /var/run/dpdk/spdk_pid74734
00:24:30.116 Removing: /var/run/dpdk/spdk_pid74769
00:24:30.116 Removing: /var/run/dpdk/spdk_pid74828
00:24:30.116 Removing: /var/run/dpdk/spdk_pid74856
00:24:30.116 Removing: /var/run/dpdk/spdk_pid74860
00:24:30.116 Removing: /var/run/dpdk/spdk_pid74894
00:24:30.116 Removing: /var/run/dpdk/spdk_pid74898
00:24:30.116 Removing: /var/run/dpdk/spdk_pid74906
00:24:30.116 Removing: /var/run/dpdk/spdk_pid74948
00:24:30.116 Removing: /var/run/dpdk/spdk_pid74956
00:24:30.116 Removing: /var/run/dpdk/spdk_pid74990
00:24:30.116 Removing: /var/run/dpdk/spdk_pid74994
00:24:30.116 Removing: /var/run/dpdk/spdk_pid75004
00:24:30.116 Removing: /var/run/dpdk/spdk_pid75013
00:24:30.116 Removing: /var/run/dpdk/spdk_pid75023
00:24:30.116 Removing: /var/run/dpdk/spdk_pid75032
00:24:30.116 Removing: /var/run/dpdk/spdk_pid75036
00:24:30.116 Removing: /var/run/dpdk/spdk_pid75046
00:24:30.116 Removing: /var/run/dpdk/spdk_pid75074
00:24:30.116 Removing: /var/run/dpdk/spdk_pid75101
00:24:30.116 Removing: /var/run/dpdk/spdk_pid75110
00:24:30.116 Removing: /var/run/dpdk/spdk_pid75139
00:24:30.116 Removing: /var/run/dpdk/spdk_pid75148
00:24:30.116 Removing: /var/run/dpdk/spdk_pid75155
00:24:30.116 Removing: /var/run/dpdk/spdk_pid75191
00:24:30.116 Removing: /var/run/dpdk/spdk_pid75202
00:24:30.116 Removing: /var/run/dpdk/spdk_pid75229
00:24:30.116 Removing: /var/run/dpdk/spdk_pid75236
00:24:30.116 Removing: /var/run/dpdk/spdk_pid75244
00:24:30.116 Removing: /var/run/dpdk/spdk_pid75251
00:24:30.116 Removing: /var/run/dpdk/spdk_pid75259
00:24:30.116 Removing: /var/run/dpdk/spdk_pid75261
00:24:30.116 Removing: /var/run/dpdk/spdk_pid75268
00:24:30.116 Removing: /var/run/dpdk/spdk_pid75276
00:24:30.116 Removing: /var/run/dpdk/spdk_pid75350
00:24:30.116 Removing: /var/run/dpdk/spdk_pid75392
00:24:30.116 Removing: /var/run/dpdk/spdk_pid75491
00:24:30.116 Removing: /var/run/dpdk/spdk_pid75524
00:24:30.116 Removing: /var/run/dpdk/spdk_pid75564
00:24:30.116 Removing: /var/run/dpdk/spdk_pid75584
00:24:30.116 Removing: /var/run/dpdk/spdk_pid75595
00:24:30.116 Removing: /var/run/dpdk/spdk_pid75615
00:24:30.116 Removing: /var/run/dpdk/spdk_pid75652
00:24:30.116 Removing: /var/run/dpdk/spdk_pid75662
00:24:30.116 Removing: /var/run/dpdk/spdk_pid75732
00:24:30.116 Removing: /var/run/dpdk/spdk_pid75748
00:24:30.116 Removing: /var/run/dpdk/spdk_pid75792
00:24:30.116 Removing: /var/run/dpdk/spdk_pid75853
00:24:30.116 Removing: /var/run/dpdk/spdk_pid75904
00:24:30.116 Removing: /var/run/dpdk/spdk_pid75929
00:24:30.116 Removing: /var/run/dpdk/spdk_pid76016
00:24:30.116 Removing: /var/run/dpdk/spdk_pid76058
00:24:30.116 Removing: /var/run/dpdk/spdk_pid76096
00:24:30.116 Removing: /var/run/dpdk/spdk_pid76309
00:24:30.116 Removing: /var/run/dpdk/spdk_pid76401
00:24:30.116 Removing: /var/run/dpdk/spdk_pid76425
00:24:30.116 Removing: /var/run/dpdk/spdk_pid76732
00:24:30.116 Removing: /var/run/dpdk/spdk_pid76772
00:24:30.116 Removing: /var/run/dpdk/spdk_pid77053
00:24:30.116 Removing: /var/run/dpdk/spdk_pid77464
00:24:30.116 Removing: /var/run/dpdk/spdk_pid77727
00:24:30.375 Removing: /var/run/dpdk/spdk_pid78446
00:24:30.375 Removing: /var/run/dpdk/spdk_pid79244
00:24:30.375 Removing: /var/run/dpdk/spdk_pid79359
00:24:30.375 Removing: /var/run/dpdk/spdk_pid79422
00:24:30.375 Removing: /var/run/dpdk/spdk_pid80650
00:24:30.375 Removing: /var/run/dpdk/spdk_pid80852
00:24:30.375 Removing: /var/run/dpdk/spdk_pid84114
00:24:30.375 Removing: /var/run/dpdk/spdk_pid84404
00:24:30.375 Removing: /var/run/dpdk/spdk_pid84515
00:24:30.375 Removing: /var/run/dpdk/spdk_pid84636
00:24:30.375 Removing: /var/run/dpdk/spdk_pid84650
00:24:30.375 Removing: /var/run/dpdk/spdk_pid84669
00:24:30.375 Removing: /var/run/dpdk/spdk_pid84685
00:24:30.375 Removing: /var/run/dpdk/spdk_pid84762
00:24:30.375 Removing: /var/run/dpdk/spdk_pid84878
00:24:30.375 Removing: /var/run/dpdk/spdk_pid85013
00:24:30.375 Removing: /var/run/dpdk/spdk_pid85079
00:24:30.375 Removing: /var/run/dpdk/spdk_pid85261
00:24:30.375 Removing: /var/run/dpdk/spdk_pid85331
00:24:30.375 Removing: /var/run/dpdk/spdk_pid85416
00:24:30.375 Removing: /var/run/dpdk/spdk_pid85717
00:24:30.375 Removing: /var/run/dpdk/spdk_pid86056
00:24:30.375 Removing: /var/run/dpdk/spdk_pid86058
00:24:30.375 Removing: /var/run/dpdk/spdk_pid88231
00:24:30.375 Removing: /var/run/dpdk/spdk_pid88233
00:24:30.375 Removing: /var/run/dpdk/spdk_pid88496
00:24:30.375 Removing: /var/run/dpdk/spdk_pid88510
00:24:30.375 Removing: /var/run/dpdk/spdk_pid88529
00:24:30.375 Removing: /var/run/dpdk/spdk_pid88562
00:24:30.375 Removing: /var/run/dpdk/spdk_pid88567
00:24:30.375 Removing: /var/run/dpdk/spdk_pid88657
00:24:30.375 Removing: /var/run/dpdk/spdk_pid88659
00:24:30.375 Removing: /var/run/dpdk/spdk_pid88768
00:24:30.375 Removing: /var/run/dpdk/spdk_pid88770
00:24:30.375 Removing: /var/run/dpdk/spdk_pid88877
00:24:30.375 Removing: /var/run/dpdk/spdk_pid88886
00:24:30.375 Removing: /var/run/dpdk/spdk_pid89270
00:24:30.375 Removing: /var/run/dpdk/spdk_pid89319
00:24:30.375 Removing: /var/run/dpdk/spdk_pid89422
00:24:30.375 Removing: /var/run/dpdk/spdk_pid89499
00:24:30.375 Removing: /var/run/dpdk/spdk_pid89793
00:24:30.375 Removing: /var/run/dpdk/spdk_pid89989
00:24:30.375 Removing: /var/run/dpdk/spdk_pid90356
00:24:30.375 Removing: /var/run/dpdk/spdk_pid90848
00:24:30.375 Removing: /var/run/dpdk/spdk_pid91658
00:24:30.375 Removing: /var/run/dpdk/spdk_pid92228
00:24:30.375 Removing: /var/run/dpdk/spdk_pid92231
00:24:30.375 Removing: /var/run/dpdk/spdk_pid94139
00:24:30.375 Removing: /var/run/dpdk/spdk_pid94192
00:24:30.375 Removing: /var/run/dpdk/spdk_pid94239
00:24:30.375 Removing: /var/run/dpdk/spdk_pid94288
00:24:30.375 Removing: /var/run/dpdk/spdk_pid94388
00:24:30.375 Removing: /var/run/dpdk/spdk_pid94436
00:24:30.375 Removing: /var/run/dpdk/spdk_pid94488
00:24:30.375 Removing: /var/run/dpdk/spdk_pid94531
00:24:30.375 Removing: /var/run/dpdk/spdk_pid94843
00:24:30.375 Removing: /var/run/dpdk/spdk_pid95977
00:24:30.375 Removing: /var/run/dpdk/spdk_pid96118
00:24:30.375 Removing: /var/run/dpdk/spdk_pid96351
00:24:30.375 Removing: /var/run/dpdk/spdk_pid96878
00:24:30.375 Removing: /var/run/dpdk/spdk_pid97037
00:24:30.375 Removing: /var/run/dpdk/spdk_pid97194
00:24:30.375 Removing: /var/run/dpdk/spdk_pid97291
00:24:30.375 Removing: /var/run/dpdk/spdk_pid97454
00:24:30.376 Removing: /var/run/dpdk/spdk_pid97559
00:24:30.376 Removing: /var/run/dpdk/spdk_pid98207
00:24:30.376 Removing: /var/run/dpdk/spdk_pid98242
00:24:30.376 Removing: /var/run/dpdk/spdk_pid98272
00:24:30.376 Removing: /var/run/dpdk/spdk_pid98525
00:24:30.376 Removing: /var/run/dpdk/spdk_pid98559
00:24:30.376 Removing: /var/run/dpdk/spdk_pid98590
00:24:30.376 Removing: /var/run/dpdk/spdk_pid99011
00:24:30.376 Removing: /var/run/dpdk/spdk_pid99020
00:24:30.376 Removing: /var/run/dpdk/spdk_pid99263
00:24:30.376 Removing: /var/run/dpdk/spdk_pid99378
00:24:30.376 Removing: /var/run/dpdk/spdk_pid99396
00:24:30.376 Clean
00:24:30.634 06:11:22 -- common/autotest_common.sh@1451 -- # return 0
00:24:30.634 06:11:22 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup
00:24:30.634 06:11:22 -- common/autotest_common.sh@728 -- # xtrace_disable
00:24:30.634 06:11:22 -- common/autotest_common.sh@10 -- # set +x
00:24:30.634 06:11:22 -- spdk/autotest.sh@386 -- # timing_exit autotest
00:24:30.634 06:11:22 -- common/autotest_common.sh@728 -- # xtrace_disable
00:24:30.634 06:11:22 -- common/autotest_common.sh@10 -- # set +x
00:24:30.634 06:11:22 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:24:30.634 06:11:22 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:24:30.634 06:11:22 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:24:30.634 06:11:22 -- spdk/autotest.sh@391 -- # hash lcov
00:24:30.634 06:11:22 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:24:30.634 06:11:22 -- spdk/autotest.sh@393 -- # hostname
00:24:30.634 06:11:22 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:24:30.893 geninfo: WARNING: invalid characters removed from testname!
00:25:02.983 06:11:50 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:25:03.242 06:11:54 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:25:06.532 06:11:58 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:25:09.822 06:12:01 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:25:13.107 06:12:04 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:25:16.390 06:12:07 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:25:19.677 06:12:10 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:25:19.677 06:12:10 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:25:19.677 06:12:10 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:25:19.677 06:12:10 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:25:19.677 06:12:10 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:25:19.677 06:12:10 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:19.677 06:12:10 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:19.677 06:12:10 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:19.677 06:12:10 -- paths/export.sh@5 -- $ export PATH
00:25:19.677 06:12:10 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:19.677 06:12:10 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:25:19.677 06:12:10 -- common/autobuild_common.sh@444 -- $ date +%s
00:25:19.677 06:12:10 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720851130.XXXXXX
00:25:19.677 06:12:10 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720851130.p7zVLX
00:25:19.677 06:12:10 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
00:25:19.677 06:12:10 -- common/autobuild_common.sh@450 -- $ '[' -n v22.11.4 ']'
00:25:19.677 06:12:10 -- common/autobuild_common.sh@451 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:25:19.677 06:12:10 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:25:19.677 06:12:10 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:25:19.677 06:12:10 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:25:19.677 06:12:10 -- common/autobuild_common.sh@460 -- $ get_config_params
00:25:19.677 06:12:10 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:25:19.677 06:12:10 -- common/autotest_common.sh@10 -- $ set +x
00:25:19.677 06:12:10 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build'
00:25:19.677 06:12:10 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:25:19.677 06:12:10 -- pm/common@17 -- $ local monitor
00:25:19.677 06:12:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:25:19.677 06:12:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:25:19.677 06:12:10 -- pm/common@25 -- $ sleep 1
00:25:19.677 06:12:10 -- pm/common@21 -- $ date +%s
00:25:19.677 06:12:10 -- pm/common@21 -- $ date +%s
00:25:19.677 06:12:10 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1720851130
00:25:19.677 06:12:10 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1720851130
00:25:19.677 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1720851130_collect-vmstat.pm.log
00:25:19.678 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1720851130_collect-cpu-load.pm.log
00:25:20.614 06:12:11 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:25:20.614 06:12:11 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10
00:25:20.614 06:12:11 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk
00:25:20.614 06:12:11 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:25:20.614 06:12:11 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:25:20.614 06:12:11 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:25:20.614 06:12:11 -- spdk/autopackage.sh@19 -- $ timing_finish
00:25:20.614 06:12:11 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:25:20.614 06:12:11 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:25:20.614 06:12:11 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:25:20.614 06:12:12 -- spdk/autopackage.sh@20 -- $ exit 0
00:25:20.614 06:12:12 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:25:20.614 06:12:12 -- pm/common@29 -- $ signal_monitor_resources TERM
00:25:20.614 06:12:12 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:25:20.614 06:12:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:25:20.614 06:12:12 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:25:20.614 06:12:12 -- pm/common@44 -- $ pid=101171
00:25:20.614 06:12:12 -- pm/common@50 -- $ kill -TERM 101171
00:25:20.614 06:12:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:25:20.614 06:12:12 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:25:20.614 06:12:12 -- pm/common@44 -- $ pid=101172
00:25:20.614 06:12:12 -- pm/common@50 -- $ kill -TERM 101172
00:25:20.625 + [[ -n 5899 ]]
00:25:20.625 + sudo kill 5899
00:25:20.634 [Pipeline] }
00:25:20.639 [Pipeline] // timeout
00:25:20.644 [Pipeline] }
00:25:20.658 [Pipeline] // stage
00:25:20.664 [Pipeline] }
00:25:20.686 [Pipeline] // catchError
00:25:20.695 [Pipeline] stage
00:25:20.698 [Pipeline] { (Stop VM)
00:25:20.713 [Pipeline] sh
00:25:20.990 + vagrant halt
00:25:26.261 ==> default: Halting domain...
00:25:31.542 [Pipeline] sh
00:25:31.818 + vagrant destroy -f
00:25:35.139 ==> default: Removing domain...
00:25:35.151 [Pipeline] sh
00:25:35.428 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output
00:25:35.437 [Pipeline] }
00:25:35.455 [Pipeline] // stage
00:25:35.461 [Pipeline] }
00:25:35.478 [Pipeline] // dir
00:25:35.483 [Pipeline] }
00:25:35.500 [Pipeline] // wrap
00:25:35.506 [Pipeline] }
00:25:35.521 [Pipeline] // catchError
00:25:35.530 [Pipeline] stage
00:25:35.532 [Pipeline] { (Epilogue)
00:25:35.546 [Pipeline] sh
00:25:35.825 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:25:42.403 [Pipeline] catchError
00:25:42.405 [Pipeline] {
00:25:42.420 [Pipeline] sh
00:25:42.700 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:25:42.959 Artifacts sizes are good
00:25:42.968 [Pipeline] }
00:25:42.987 [Pipeline] // catchError
00:25:43.000 [Pipeline] archiveArtifacts
00:25:43.008 Archiving artifacts
00:25:43.184 [Pipeline] cleanWs
00:25:43.203 [WS-CLEANUP] Deleting project workspace...
00:25:43.203 [WS-CLEANUP] Deferred wipeout is used...
00:25:43.230 [WS-CLEANUP] done
00:25:43.232 [Pipeline] }
00:25:43.251 [Pipeline] // stage
00:25:43.256 [Pipeline] }
00:25:43.273 [Pipeline] // node
00:25:43.279 [Pipeline] End of Pipeline
00:25:43.315 Finished: SUCCESS